title | uuid | pmc_id | search_term | text
---|---|---|---|---|
Commentary on: Megyesi MS, Nawrocki SP, Haskell NH. Using accumulated degree‐days to estimate the postmortem interval from decomposed human remains.
|
6f5e252f-b148-45be-abd1-96e5cb65537e
|
10092795
|
Forensic Medicine[mh]
|
This project was not supported by any external funding.
|
Environmental microbiology going computational—Predictive ecology and unpredicted discoveries
|
67905c48-0d55-4e51-b4c2-10278140f7e0
|
10092848
|
Microbiology[mh]
|
The fields of microbial ecology and environmental microbiology are producing vast amounts of data, mainly nucleic acid sequence data due to the extensive use of amplicon sequencing and metagenomics, and an increasing use of transcriptomics. To increase our understanding of microorganisms in terrestrial ecosystems, multiple, concerted efforts to collect large numbers of samples for analyses of microbial communities were initiated more than 15 years ago (Fierer & Jackson; Lozupone & Knight) but have expanded dramatically in recent years, with The Earth Microbiome Project Consortium being one of the first major endeavours for bacteria across all biomes (Thompson et al.) and the work by Tedersoo et al. for soil fungi. The majority of the investigations have a biogeography focus based on a single sampling occasion, and the word ‘global’ is frequently used in the titles of these soil microbial catalogues and surveys (Bahram et al.; Delgado‐Baquerizo et al.; Gobbi et al.). Similar efforts have been made for many other biomes. Although largely descriptive, they have contributed to a better understanding of microbial diversity and the distribution of microbial taxa and their functions at an unprecedented spatial scale. Further, correlative analyses have indicated direct or indirect drivers of the observed patterns as well as the role of microbial communities in ecosystem functioning (Bahram et al.; Delgado‐Baquerizo et al.; Garland et al.). The massive amount of complex data is not only an opportunity but also a major challenge when it comes to meaningful interpretation. The field of computational biology, the intersection of computer science and biology, is rapidly expanding and developing new methods for this purpose. Artificial intelligence (AI) approaches, including machine learning (ML) and, to some extent, deep learning (DL) methods, are promising for dealing with big data in microbial ecology and environmental microbiology (Ghannam & Techtmann; McElhinney et al.). ML approaches in particular are increasingly being adopted by ecologists, and many of these methods will soon become routine tools for analyses of complex microbial omics data. They can be used to categorize and find patterns in uncategorized data as well as to analyse data that we already know how to categorize. There are several advantages to using ML methods in microbiome studies: for example, they can deal with non‐linear relationships, make better use of the full depth of high‐dimensional data, and can be used to build predictive models based on environmental and community data. Predictive modelling is very attractive in microbial ecology. Among the ML methods, random forests have become frequently applied in microbiome studies in the last decade (Jones et al.; Ryo & Rillig). They are predominantly used to identify the best predictors for a given response variable and have, for example, been used to rank the environmental variables determining the major microbial phyla in wetlands (Bahram et al.) and the diversity of ammonia oxidizing archaea across European soils (Saghaï et al.), as well as the relative importance of biotic and abiotic controls of nitrous oxide emissions from agricultural soils (Jones et al.). Random forest modelling can be very useful when studying remote areas that are difficult to sample, as exemplified by climate projections on microbial communities in the Antarctic Ocean (Tonelli et al.).
RF models can also show how predictions change over the range of each individual predictor variable, thereby making it possible to identify thresholds or tipping points (Apley & Zhu; Saghaï et al.). Already in 2012, artificial neural networks were used to incorporate interactions among community members in models for predicting microbial community composition in time and space based on environmental data (Larsen et al.). A similar approach was used to predict the maize rhizosphere community at different plant development stages or growth conditions (García‐Jiménez et al.). This type of approach can potentially assist in the microbiome engineering of important crops. However, with sequencing costs being relatively low, there is an increasing interest in using AI and microbiome data for microbiome‐based diagnostics as a means to address environmental challenges and advance management practices (McElhinney et al.). Two recent examples of the latter are the use of soil microbiome data to predict the propensity for specific plant diseases in agriculture (Yuan et al.) and soil health metrics (Wilhelm et al.), which can be laborious and expensive to measure. Combining ML and microbiome data has further shown promise in environmental monitoring, tracing of contaminants, and predictions of environmental quality (Sperlea et al.; Techtmann & Hazen; Wheeler), which allows us to move away from indicator taxa or microbial biomarkers and instead use the full breadth of information encompassed by the microbial community in a given site or sample.
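The random-forest workflow described above can be made concrete with a short, minimal sketch in Python using scikit-learn: it ranks a handful of environmental predictors of a microbial response variable by permutation importance and then traces a single-predictor response curve of the kind used to spot thresholds or tipping points. The file name, predictor names, and response variable are hypothetical placeholders rather than data from any of the cited studies.

```python
# Minimal sketch: rank environmental predictors of a microbial response
# variable with a random forest, then trace a one-dimensional response curve.
# The file name, column names, and response variable are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("soil_samples.csv")              # one row per soil sample
predictors = ["pH", "moisture", "total_N", "mean_annual_temp"]
X, y = df[predictors], df["shannon_diversity"]    # response: a diversity index

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

# Permutation importance on held-out samples ranks the predictors.
imp = permutation_importance(rf, X_test, y_test, n_repeats=20, random_state=0)
ranking = sorted(zip(predictors, imp.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for name, score in ranking:
    print(f"{name:>18s}: {score:.3f}")

# Response curve for a single predictor: vary pH over its observed range while
# holding the other predictors at their medians; abrupt changes in the
# predicted response hint at thresholds or tipping points.
grid = np.linspace(X["pH"].min(), X["pH"].max(), 50)
baseline = X.median().to_dict()
curve = [rf.predict(pd.DataFrame([{**baseline, "pH": v}], columns=predictors))[0]
         for v in grid]
```

The same pattern extends directly to classification (e.g., disease propensity) by swapping the regressor for a random-forest classifier.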
The large amounts of genetic data and corresponding meta‐data generated in microbiome studies are real treasures, especially when it comes to metagenomes and metatranscriptomes, and only a fraction of the information available has been explored. These data can be used for meta‐analyses to increase the scale of a study but, more importantly, they can be used to address questions other than those posed by the researchers who collected the original data. Making use of already published genome or sequence data in microbial ecology is not a new idea (Jones & Hallin), but we now have increasing possibilities to mine extremely large data sets (Coelho et al.). Even more exciting are the possibilities to combine different types of data and information to go beyond the microbiome data. Integration of knowledge from diverse fields of research and the combination of microbiome data with other data from different sources have the potential to yield unexpected and unpredictable results, as well as new discoveries. A recent example of re‐using and combining data is the work by Ke et al., who reanalysed published datasets on the effects of pesticide application on soil microbial communities combined with information on the physical and chemical properties of the pesticides. By developing an ML model, they were able to show that physical pesticide properties largely explain the ecological impact of the pesticide. This information can guide the design of pesticide molecules to minimize environmental risk. In the field of precision agriculture, researchers have proposed the integration of AI and nanotechnology with disparate datasets to enable the design of nanoscale agrochemicals for sustainable food production (Zhang et al.). In another study, geographic and meteorological data as well as plant traits, land‐use type, and microbial community data were used in an ML‐based prediction of grassland degradation, which is a multi‐factorial phenomenon not easily captured by a few variables (Yan et al.). Combining datasets and using computational approaches can also be used to develop new diagnostic tools. For example, de Andrade et al. suggest the development of a soil quality index based on soil microbiome data, crop productivity, and a range of abiotic environmental factors to improve crop production systems using AI. Data‐driven research relying on large, multiple, complex datasets and on computational methods and capacity, as exemplified above, indicates a new paradigm in microbial ecology, and in ecology in general (McCallen et al.). We can anticipate new insights, similar to the leaps taken after advanced bioinformatics and multi‐omics approaches became an integral part of microbial ecology research. Microbial ecology and environmental microbiology will follow the trajectory of the life sciences and become increasingly computationally demanding, focusing on larger and also more complex sets of information. We are already seeing laboratories becoming sparsely populated while students, postdocs, and researchers spend increasing amounts of time in front of their computers organizing and analysing data. My crystal ball says that a shift towards a data‐driven, rather than an experiment‐driven and data‐generating, science that depends on complex big data and advanced technologies will be a game changer in microbial ecology and environmental microbiology. This development is already putting pressure on the management, storage, and sharing of data.
Data‐driven microbial ecology research in which different types of data are combined to consider the multidimensionality of ecosystems further suggests that students and researchers not only need to enhance their computational skills, but also their skills in working across disciplines. Nevertheless, important discoveries should ideally be followed by experimental approaches to test hypotheses, determine causal relationships, and verify mechanisms. Experimental validation is already a bottleneck for closing the circle in microbial ecology research and, although my crystal ball is a bit hazy here, it looks like this will become an even greater bottleneck in the era of big data and data‐driven research in microbial ecology.
|
Influence of New Technology in Dental Care: A Public Health Perspective
|
ee9afe1d-1cf3-4786-9c20-5a73d2cf9845
|
10093858
|
Dental[mh]
| |
Stochastic Dynamic Mass Spectrometric Quantitative and Structural Analyses of Pharmaceutics and Biocides in Biota and Sewage Sludge
|
ed6b0fe9-8eb4-4eac-91d5-b738549cb8e8
|
10094044
|
Pharmacology[mh]
|
Biocides and antibiotics are major classes of chemicals used to control or prevent the growth of microorganisms such as fungi, mosses, bacteria, lichens, and algae. Therapeutics are efficacious at low concentration levels, owing to specific interactions with single cellular targets. Conversely, biocides adsorb on a microbe's surface or are applied in suspension at concentration levels higher than their minimum inhibitory concentrations. Their monitoring in the aquatic environment is therefore of primary and increasing concern. The main biocidal disposal route is via sewage systems and drains, and the potential for bioaccumulation carries toxicological risks for the ecosystem. The major chemical constituents of cleaning substances, including disinfectants, cosmetics, and personal care products, are surfactants. Surfactants can be found in paints, polymer materials, fabrics, pesticides, and pharmaceutical products, in addition to oil, mining, and cellulose factories. They are emerging contaminants according to UNESCO, and the monitoring of environmental and wastewater pollution by surfactants is a primary research task. Quaternary ammonium surfactants, or so-called quats, are toxic to aquatic organisms at concentration levels of 1 mg·L−1. However, quaternary ammonium surfactants are not only toxic to environmental organisms but also promote the proliferation of antibiotic resistance. In 2019, an estimated 4.95 million deaths worldwide were associated with antibiotic-resistant pathogens, a burden that has been described as a silent pandemic. The misuse of therapeutics is a factor contributing to antibiotic resistance, which remains not well understood but is recognized to be among the most pressing global environmental and human health problems. It has been projected that by 2050 approximately ten million people will die annually as a result of antibiotic resistance. Despite this, many quats play a vital role in biochemical pathways. Benzalkonium chlorides belong to the group of quaternary ammonium surfactants and are extensively utilized for various industrial and domestic purposes, including contact lens solutions and eye drops. BAC-C14 is the most frequently detected biocide among the benzalkonium chlorides in engineered and environmental systems. Their presence in wastewater has led to upsets of activated sludge processes. Benzalkonium chloride derivatives in sludge are capable of biodegrading during biological wastewater treatment or of adsorbing onto biomass, and their biodegradation pathways have been examined. The presence of benzalkonium chloride disinfectants in the environment promotes the abundance and diversity of antibiotic resistance genes in sewage sludge microbiomes. Antibiotics, like surfactants, are environmental pollutants that represent a widespread concern and an ecotoxicological risk. Atenolol and propranolol are β-blockers used to treat cardiovascular diseases. They are emerging pollutants, found in sewage effluents and surface water. β-Adrenergic receptors, the major target macromolecules of β-blockers, have been detected in aquatic animals and fish, so these pharmaceutics can affect physiological processes in wild animals. The same can be said for paracetamol (tylenol or acetaminophen), which is a commonly used anti-inflammatory, antipyretic, and analgesic medication. It can be found in tap water originating from the pharmaceutical industry and in urban and hospital waste.
Innovative strategies for the regulation of pharmaceutics in surface water and groundwater have been developed through European legislation (Directives 2000/60/EC, 2008/105/EC, 2009/90/EC, 2013/39/EU, and 2015/1787/EU). The determination of pharmaceutics and biocides in biota, including the analysis of invertebrates, plankton, fish, human tissues and fluids, birds, and marine mammals, is of importance for assessing the risk to human health and environmental damage, as well as for determining the potential ecological risk index. Biomonitoring includes saltwater and freshwater mussels. These biological species filter large quantities of water, thus accumulating both organic and inorganic pollutants from water or suspended matter at elevated levels. In addition, mussels are (i) widely distributed in the environment; (ii) sessile biological species; (iii) able to thrive in highly polluted areas; and (iv) easily sampled. The biomonitoring of pollutants via mussels allows us (a) to determine the concentration levels of pollutants in feral organisms; (b) to examine the spatial (geographic) distribution of pollution, if any; and (c) to study the temporal distribution and variation of pollution across seasons. Toxicokinetics examines four major processes in biological organisms such as mussels: (a) the absorption of pollutants; (b) their distribution; (c) metabolism; and (d) excretion. The rates of these processes determine a contaminant's bioavailability. Knowledge of the adsorption behaviour of pollutants is also of importance, because adsorption is used to decontaminate water; it plays a crucial role in methods for the removal of organics in wastewater treatment plants, notably in aerobic and anaerobic sludge systems. The importance of developing methods for determining surfactants and antibiotics in mixtures in biological and environmental samples stems from the fact that benzalkonium chlorides, for example, are used in commercial pharmaceutical formulations. Catanionic mixtures of drug–surfactant aggregates such as benzalkonium chlorides with β-blockers such as alprenolol, ATE, and PRO have been detailed. β-Blockers have been used to treat ocular hypertension and glaucoma, and their pharmaceutical formulations with benzalkonium chlorides have been examined in skin creams and eye drops. Routinely, MS methods have been used to determine organic pollutants. The ultra-high resolving power, selectivity, accuracy, precision, and sensitivity of tandem mass spectrometry are irreplaceable for the analysis of environmental and biological samples. However, MS cannot be used as a universal method for determining mixtures of quaternary ammonium derivatives simultaneously. Analytical mass spectrometry is a complex term, referring to three major research tasks: qualitative, quantitative, and structural analyses. However, pursuing exact chemometrics (|r| = 1) is a challenging analytical task. Among theoretical model equations, those producing exact method performance are particularly relevant. Accordingly, research effort has been devoted to developing MS methods for data processing that ensure the quality and comparability of analytical information for the interpretation of results, in agreement with Council Directives 96/23/EC and 2002/657/EC.
However, depending on the analyte concentration in biological, foodstuff, and environmental samples, a decrease in method performance is observed when classical methods are used for the data processing of MS measurands. How do we address this problem? It has been addressed via the innovative stochastic dynamic Equations (1) and (2), which are capable of quantifying analytes exactly. Formula (2) is derived from Equation (1), where I denotes the intensity of the MS peak. Imaging methods for quantifying analytes, or approaches capable of spatially resolving the chemical composition of a surface sample, are excluded. Formula (2) overcomes a set of difficulties of classical quantitative MS concepts. Its superior chemometrics is explained by the fact that Formula (2) quantifies exactly the fluctuations in the measurands within a short span of scan time. It is also capable of determining the 3D molecular and electronic structures of chemicals mass-spectrometrically when used complementarily with Arrhenius's Equation (3). The functionality D″_SD = f(D_QC) has shown |r| = 0.99994. To summarize, the study deals with the quantitative and 3D structural stochastic dynamic MS biomonitoring of mixtures of the pharmaceutics PARA, ATE, and PRO in the presence of the benzalkonium chloride surfactants BAC-C12, BAC-C14, BAC-C16, and BAC-C18 in biota, using mussel tissue, sludge cakes, and a treated effluent.

$$D_{SD}^{tot} = \sum_{i}^{n} D_{SD,i} = \sum_{i}^{n} 1.3194 \times 10^{-17} \times A_i \times \frac{\overline{I_i^2} - \overline{I_i}^{\,2}}{\overline{\left(I_i - \overline{I_i}\right)^2}} \quad (1)$$

$$D''_{SD,tot} = \sum_{i}^{n} D''_{SD,i} = \sum_{i}^{n} 2.6388 \times 10^{-17} \times \left(\overline{I_i^2} - \overline{I_i}^{\,2}\right) \quad (2)$$

$$D_{QC} = \frac{\prod_{i=1}^{3N} \nu_i^{0}}{\prod_{i=1}^{3N-1} \nu_i^{s}} \times e^{-\frac{\Delta H^{\#}}{RT}} \quad (3)$$

2.1. Mass Spectrometric Data

2.1.1. Mass Spectrometric Fragmentation Reactions of Paracetamol and Its d3-Derivative

Paracetamol shows the fragmentation path CID (m/z 152)→152, (134,) 110 (see the chemical diagrams of the species). Its dimer [2M+H]+ shows the fragmentation path CID (m/z 303)→303, 152. The ammonium adduct [2M+NH4]+ exhibits a low-abundance peak at m/z 320. MS analysis of PARA radical-cations has been reported, and fragmentation processes involving a radical-cation mechanism of bond cleavage have been proposed. PARA tends to stabilize not only the Cu2+ adduct but also adducts of alkali metal ions and the NH4+ cation. There are species of the type [M+NH4]+ (m/z 169), [M+Na]+ (m/z 174), [M+K]+ (m/z 190), [2M+NH4]+ (m/z 320), [2M+Na]+ (m/z 325), and [2M+K]+ (m/z 341), respectively. As the data reveal, the abundance of the peaks depends on the applied voltage, the presence of formic acid, and the analyte concentration. The same is true for the peak of the protonated analyte [M+H]+ and its major fragmentation product of N–C bond cleavage, [M-CH3CHC=O]+, at m/z 152 and 110. The data on d3-PARA show similar fragmentation patterns together with some adducts. The peak at m/z 331 belongs to [2·d3-M+Na]+. A fragmentation reaction CID (m/z 152)→152, 110, 93 is observed. The MS spectra of PARA in a sludge cake and in biota show competitive fragmentation mechanisms producing not only charged cations but also cation radicals.

2.1.2. Mass Spectrometric Fragmentation Reactions of Biocides

The surfactants BAC-C12, BAC-C14, BAC-C16, and BAC-C18 exhibit the molecular cation [M]+. The major fragmentation path shows the loss of 92 Da of toluene. As with the MS spectra of PARA and its d3-PARA derivative, depending on the experimental conditions, there are competitive fragmentation reactions producing more than one conformational and tautomeric form of the product ions.
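To make the notation concrete, the following minimal sketch evaluates Equation (2) for a set of ions from scan-by-scan peak intensities, i.e., D″_SD,i = 2.6388 × 10⁻¹⁷ × (⟨I_i²⟩ − ⟨I_i⟩²), and sums the contributions to obtain D″_SD,tot. It is a schematic reading of the formula only; the intensity traces are invented placeholders, and no instrument-specific corrections from the authors' processing pipeline are included.

```python
# Schematic evaluation of Equation (2): D''_SD,i = 2.6388e-17 * (<I_i^2> - <I_i>^2),
# where <.> is the arithmetic mean over the scans in the chosen span of scan time.
# The intensity values below are made-up placeholders.
import numpy as np

SCALE = 2.6388e-17  # constant from Equation (2)

def d_sd_double_prime(intensities):
    """D''_SD for one ion from its scan-by-scan intensities."""
    i = np.asarray(intensities, dtype=float)
    return SCALE * (np.mean(i**2) - np.mean(i)**2)

# One intensity trace per monitored ion within the same span of scan time.
ion_traces = {
    110.065: [2.1e5, 2.4e5, 1.9e5, 2.2e5, 2.3e5],
    152.071: [8.0e4, 7.6e4, 8.3e4, 7.9e4, 8.1e4],
}

per_ion = {mz: d_sd_double_prime(trace) for mz, trace in ion_traces.items()}
d_sd_total = sum(per_ion.values())   # D''_SD,tot as the sum over the ions
for mz, val in per_ion.items():
    print(f"m/z {mz:.3f}: D''_SD = {val:.3e}")
print(f"total: {d_sd_total:.3e}")
```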
Although the major MS fragmentation path of these surfactants is associated with the loss of the hydrophilic head of the compounds, the product ion consisting of the charged hydrophobic tail exhibits complex conformational preferences and electronic effects. Therefore, when only the experimental measurands in the CID-MS/MS and SRM operation modes are examined, some observable peaks remain unassigned to ions. Consider the shape of the SRM spectrum of BAC-C12 and the proposed chemical 2D diagrams and electronic structures of the ions at m/z 212 and 213.

2.1.3. Mass Spectrometric Fragmentation Reactions of β-Blockers

Propranolol shows the [M+H]+ cation at m/z 260. The fragmentation paths depend on CE, pH, and other conditions. With increasing CE, a low-abundance ion appears at m/z 183, due to the cleavage of the [C3H9N]0 fragment and solvent water. The peak at m/z 282 in the MS spectrum of PRO at CE = 30 V is assigned to the [M+Na]+ adduct. The same is true for the ATE MS reactions. Their identical structural (2,3-dihydroxy-propyl)-isopropyl-ammonium fragment gives rise to peaks at m/z 145, 105, 101, 83, and 64, respectively. ATE exhibits the [M+H]+ cation at m/z 267. Quantitative analysis was carried out by examining the [M+H]+ cations at m/z 260 and 267 of PRO and ATE in the mixture. Employment of the SRM and SIM modes leads to pairs of MS peaks at 260/261 and 267/268. Classical quantitative methods look at average m/z data on MS peaks at m/z 260.5 (SRM) and 262.07 (SIM) for PRO, as well as 267.52 (SRM) and 267.82 (SIM) for ATE. The matrix significantly affects not only the m/z data, compared with the results from the fragmentation paths of standard samples, but also the product ions. Because only an MS peak of the [M+H]+ cation is used in both cases, excluding a statistically representative set of fragmentation species, it is particularly important to account for the molecular conformations and electronic effects of the protonated pharmaceutics in order to assign the statistically different sets of m/z data in the SRM and SIM spectra of standard samples of the pharmaceutics and of their mixtures in environmental and biological matrixes. For the purposes of 3D structural analysis and an empirical demonstration of the assignment of the [M+H]+ cation of PRO at m/z 260 in a complex sample matrix, a statistically representative set of MS peaks at m/z 260, 283, 157, 116, and 98 found in the MS spectrum in soil is used. The same fragmentation species have been found in the MS spectrum of PRO in biological samples. Owing to the two competitive mechanisms proposed for the formation of the MS ion at m/z 183, our study examines the correlation between the MS data and the theoretical quantum chemical ones by looking at ions 183_a and 183_b. The MS ion at m/z 116 in the CID-MS/MS spectrum of PRO has also been observed when examining tandem MS/MS processes of ATE and studying the CID interaction of its [M+H]+ cation at m/z 267. The peak at m/z 116 has been used to determine PRO in thin tissue sections of the liver, brain, and kidney as well. In addition to the peak at m/z 116, ATE exhibits a set of ions depending on the experimental conditions. These are peaks at m/z 225, 208, 190, 173, 162, and 145, together with peaks at m/z 133, 115, and 107.

2.2. Determination of Stochastic Dynamic Diffusion Parameters

The capability of Equation (1) to determine the 3D molecular structures of analytes, when it is used complementarily with Equation (3), has already been reviewed.
In this light, we prove here its validity and compatibility with Equation (2). In verifying empirically the validity of Equation (1), we can explore the results from the SRM data on the MS ion at m/z 110 of PARA from the MS/MS spectra of its [M+H]+ cation at m/z 152. The new data on the variables of PARA show lnP1 = 17.0532. Therefore, Equation (1) shows that the MS law is valid for the temporal distribution of the measurands of PARA as well. Details of the statistical parameters Ai and the corresponding calculation tasks have been discussed previously. The corresponding figure illustrates the relation between the D′_SD and D″_SD parameters, showing |r| = 0.99953. The deviation from |r| = 1 results from the error contribution of the data processing of the temporal distribution of the intensity with respect to the scan time, i.e., of the function (I − <I>)² = f(t) fitted to the SineSqr function, which produces the statistical parameter Ai.

2.3. Quantitative Data on Biocides

We shall support our method by highlighting how the D″_SD parameters are determined per span of scan time. We shall justify the view that exact relations are obtained when the fluctuations of the measurands are quantified within a short span of scan time. The question that we need to address is: "Which criteria determine a set of MS measurands with respect to a concrete span of scan time as the true one?", or, in other words, which methods are used to validate the parameters of Equation (2). In doing so, we use data from the selected reaction monitoring mode on the PARA [M+H]+ ion at m/z 152 for segments of the MS method, where each segment (i) has collected a full mass-scan set of variables. The segment raw data QC_High_SRM_SEG_CE40_i.raw (i = 1–3) are examined. Only the output of the fragmentation ion at m/z 110 for the SRM operation mode is listed, while the results from the CID-MS/MS spectra of PARA of its [M+H]+ cation at m/z 152 are listed as a function of experimental conditions such as CE and the presence of formic acid. The chemometrics of the Shapiro–Wilk normality test, together with the ANOVA data, are summarized. Data on the quality control standard samples of a mixture of the pharmaceutics, QC_H_SRM_CE40_3 (segment 3), reveal three groups of m/z parameters that are mutually significantly different from the perspective of chemometrics. These are a subset of variables of segment 3, shown as QC_High_SRM_SEG_CE40_3_1, QC_High_SRM_SEG_CE40_3_2, and QC_High_SRM_SEG_CE40_3_3. The same is true for the two recorded sets of m/z variables of the same ion in segment (2) (QC_High_SRM_SEG_CE40_2_1 and QC_High_SRM_SEG_CE40_2_2). The chemometric analysis of the datasets of measurands at m/z 110.1, i.e., QC_High_SRM_SEG_CE40_3_3, QC_High_SRM_SEG_CE40_2_1, and QC_High_SRM_SEG_CE40_1, shows that they are statistically equal. In other words, the correlative analysis and the determination of the D″_SD parameters of Equation (2) within the framework of the three segments of the MS spectra are carried out using those sets of measurands that are statistically significantly equal, namely those with values at m/z 110.065. Three sets of peaks at m/z 110 are therefore distinguished quantitatively: 110.8449 ± 0.05985, 110.06525 ± 0.04709, and 110.23885 ± 0.04898. The number of subsets of variables is not large when looking at whole datasets of average values over the whole period of measurement. The results from segment (3) (QC_High_SRM_SEG_CE40_3) show a value at m/z 110.0673. In other words, our approach does not ignore measurable sets of low-abundance variables and their fluctuations.
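The grouping of m/z measurands into statistically equal and statistically different sets can be outlined with standard tests, as in the following sketch, which applies a Shapiro–Wilk normality test per segment and a one-way ANOVA across segments. The segment labels and values are invented placeholders, not the study's data.

```python
# Outline of the chemometric grouping step: test each segment's m/z readings
# for normality (Shapiro-Wilk), then compare segments with one-way ANOVA.
# Segment labels and values are invented placeholders.
from scipy import stats

segments = {
    "SEG_1":   [110.066, 110.064, 110.067, 110.065, 110.066],
    "SEG_2_1": [110.065, 110.066, 110.064, 110.067, 110.065],
    "SEG_3_3": [110.067, 110.065, 110.066, 110.064, 110.066],
}

for name, values in segments.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

f_stat, p_anova = stats.f_oneway(*segments.values())
print(f"one-way ANOVA across segments: F = {f_stat:.3f}, p = {p_anova:.3f}")
# A non-significant ANOVA p-value supports treating the segments' m/z
# measurands as statistically equal and pooling them for the D''_SD analysis.
```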
The examples could be multiplied, in fact, almost without limit when looking at the datasets of measurands in environmental and biological samples of pharmaceutics and biocides. The results provide compelling evidence for the advantages of our method. The analysis of the effluent, the treated sludge cake, and the biota shows that PARA and its d3-derivative in the presence of pharmaceutics or biocides reveal peaks in the range m/z 107–115, which further complicates not only the qualitative assignment of the products but also the quantification of single analytes in mixtures, as well as the assessment of deuterium exchange processes of d3-PARA, if any. Quantitative data on biocides in biota show r² = 0.9309–0.985 using the classical approach of quantifying the average total intensity of the MS peaks over the whole period of measurement. The results were data-processed via the ICIS algorithm of peak detection. The Savitzky–Golay smoothing function with baseline correction was used. The ICIS algorithm involves a trapezoidal integration approach. In order to account for the effect of the smoothing function on the chemometrics, we examined here the same relationships but applying baseline correction and TIA. The relationships among the concentrations of BAC-C12 and BAC-C14 in biota show |r| = 0.98745 and 0.99223. Despite this, the ICIS and TIA methods show low |r| parameters and high sd(yEr±) values for the mean values. The linear equation obtained via the direct application of TIA to BAC-C14 is y = (−16.13779 ± 14.6133) + (6.54102 ± 0.33481)·x. The error contribution of the data-processing algorithm affects the mean values of the intercept and the slope of the linear regression, respectively, and thus the correlation equations. The reliability of the quantitative data is further complicated when examining biota. There are competitive fragmentation reactions producing both mono-cations and cation radicals. The CID-MS spectra of PARA in freeze-dried biota show peaks at m/z 108.07 and 109.07, while the wet sample shows MS ions at m/z 107.07, 110.07, and 111.1. The mixtures of biocides show pairs of ions at m/z 211 and 212 (BAC-C12) and at m/z 239 and 240 (BAC-C14), instead of a single ion according to the common fragmentation scheme. The data processing of the isotopologues in biota via the latter algorithms yields high sd(yEr±) values. With either the ICIS or the TIA algorithm, with or without baseline correction, the uncertainty of the analytical results is increased. The task can be completed precisely by Equation (2). ICIS and TIA are incapable of providing reliable analytical data on environmental and biological samples, particularly when there are tautomers and fragmentation reactions involving different molecular-level mechanisms, leading not only to cations but also to cation radicals. Moreover, ca. 25% of pharmaceutics exist in more than one tautomeric form, and almost all antibiotics are characterized by multiple ionizable protonation positions in their molecules. The cited reference addresses the same problem of quantifying LMW antibiotics in biological fluids. Thus, the following paragraph focuses on the quantitative analysis of the biocides using Equation (2). We shall describe the advantages of Formula (2) compared with the results from the ICIS and TIA methods presented so far. The m/z data on BAC-C12 and BAC-C14 in biota at concentrations c = 2–80 ng·mL−1 are summarized.
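The two calibration treatments compared in the text can be sketched side by side: a classical linear calibration of an integrated-intensity response against concentration, and the ln[D″_SD] = f(conc.) relation, each summarized by its |r|. All numbers below are invented placeholders; only the form of the relations follows the text.

```python
# Sketch of the two calibration treatments discussed in the text:
# (a) classical linear calibration of an integrated-intensity response vs. c,
# (b) ln[D''_SD] as a linear function of concentration.
# Concentrations (ng/mL) and responses are invented placeholders.
import numpy as np

conc = np.array([2.0, 6.0, 20.0, 40.0, 80.0])
tia_response = np.array([3.1, 17.8, 118.0, 247.0, 512.0])       # e.g. TIA areas
d_sd = np.array([1.2e-13, 3.9e-13, 1.4e-12, 2.9e-12, 6.1e-12])  # D''_SD values

def linear_fit(x, y):
    """Least-squares slope, intercept, and |r| for y = a + b*x."""
    b, a = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, abs(r)

a1, b1, r1 = linear_fit(conc, tia_response)        # classical calibration
a2, b2, r2 = linear_fit(conc, np.log(d_sd))        # ln[D''_SD] = f(conc.)

print(f"TIA:       y = {a1:.2f} + {b1:.3f}*c, |r| = {r1:.5f}")
print(f"ln[D''SD]: y = {a2:.2f} + {b2:.4f}*c, |r| = {r2:.5f}")
```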
ANOVA and t-tests show two sets of m/z values, at 212.22211 ± 0.12988 and at m/z 211.6, at c = 2 ng·mL−1 (BAC-C12), which are statistically significantly different. At c = 6 ng·mL−1, three sets of measurands are distinguished, at m/z 213, 212, and 211.5. There are three elemental compositions, molecular conformations, and electronic structures of the BAC-C12 species (212.21401 ± 0.1001, 212.54356 ± 0.12232, and 211.786 ± 0.15203). The datasets at m/z 212.22211 ± 0.12988 and 212.21401 ± 0.1001 of the BAC-C12 ions at c = 2 and 6 ng·mL−1 are not statistically significantly different; the ion at m/z 212.2 is one and the same ion at the two concentrations. Further, we shall see that Equation (2) accounts precisely for the fluctuations in the m/z and intensity data of the MS peaks, thus producing excellent-to-exact quantification and 3D structural analysis, despite the complexity of the isotope shape. The quantification of the biocide BAC-C12 in biota employing the D″_SD parameter and assessing the relationship ln[D″_SD] = f(conc.) yields |r| = 0.99991–0.99058 for c = 2–80 ng·mL−1. Conversely, the ICIS and TIA algorithms produce |r| = 0.98924. A value of |r| = 0.999 has been obtained when studying the same set of analytes in sludge. Data on PARA and d3-PARA at c = 5–400 ng·mL−1 show r² = 0.997. Quantification of BAC-C12 and BAC-C14 yields r² = 0.987 and 0.983. The analysis of the peak at m/z 211.75 ± 0.15 of the BAC-C12 ions in biota using the [M+H]+ cation at m/z 304 produces |r| = 0.99721 and 0.98411 when employing the equation D″_SD = f(conc.). Again, the method performance is improved. The analysis of BAC-C14 yields |r| = 0.99188 within the concentration range c = 20–80 ng·mL−1.

2.4. Quantitative Functions between Mass Spectrometric Stochastic Dynamic Diffusion Parameters and Theoretical Total Intensity Variables with Respect to the Experimental Parameter Collision Energy

Since the main goal of the current paper is to advocate a general innovative approach to quantifying analytes in complex environmental and biological matrixes mass-spectrometrically via Equation (2), in this short subsection we direct the reader's attention to Equation (4), which appears valid for MS data on labetalol.

$$\overline{I_{TOT,q}} = \frac{1}{2} \times \frac{A_{I_q}}{A_{D_q}} \times D''_{SD,tot} \quad (4)$$

Equation (4) is derived from Equation (1) (see Equation (A6) in the Appendix). It connects the theoretical average intensity data of the analyte MS ions obtained as a function of CE with the D″_SD of Equation (2). The statistical parameters A_Dq and A_Iq are the functional amplitudes of the SineSqr function fitted to the relations D″_SD,q = f(CE) and <I>_q = f(CE) of the q-th MS fragment ion. We look at new empirical proof of the validity of Equation (4), found by examining the PARA measurands. The relations D″_SD,q = f(CE) and <I>_q = f(CE) are depicted for the MS ions of PARA at m/z 60 and 64. The theoretical <I>_theor data on the ions with respect to the CEs and the A_Dq and A_Iq parameters correlate with the experimental <I>_exp ones and show |r| = 0.99429 and 0.96501. Since, so far, we have considered only two cases of the application of Equation (4) for this purpose, we are unable, at present, to assess any apparent violations of its validity.

2.5. Theoretical Data

2.5.1. 3D Molecular Conformations and Electronic Structures of Analytes and Energetics

The calculation of the D_QC parameters of Equation (3) has been discussed previously.
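The role of the SineSqr amplitudes in Equation (4) can be illustrated schematically: both D″_SD,q = f(CE) and <I>_q = f(CE) are fitted with a SineSqr function, and the ratio of the fitted amplitudes is combined with a D″_SD,tot value as the equation prescribes. The SineSqr form y = y0 + A·sin²(π(x − xc)/w) is assumed here, and the CE grid, data values, and D″_SD,tot are hypothetical.

```python
# Sketch of Equation (4): <I_TOT,q> = 0.5 * (A_Iq / A_Dq) * D''_SD,tot,
# where A_Iq and A_Dq are amplitudes of SineSqr fits of <I>_q = f(CE) and
# D''_SD,q = f(CE). The SineSqr form and all data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sine_sqr(x, y0, a, xc, w):
    return y0 + a * np.sin(np.pi * (x - xc) / w) ** 2

ce = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])   # collision energies, V
mean_i = np.array([1.0e4, 2.6e4, 4.9e4, 6.8e4, 7.4e4, 6.5e4, 4.7e4, 2.5e4])
d_sd_q = np.array([2.0e-14, 5.5e-14, 9.8e-14, 1.4e-13, 1.5e-13, 1.3e-13, 9.4e-14, 5.2e-14])

p_i, _ = curve_fit(sine_sqr, ce, mean_i, p0=[1e4, 7e4, 5.0, 40.0])
p_d, _ = curve_fit(sine_sqr, ce, d_sd_q, p0=[2e-14, 1.4e-13, 5.0, 40.0])
a_iq, a_dq = p_i[1], p_d[1]

d_sd_tot = 3.2e-13                             # placeholder total D''_SD value
i_tot_theor = 0.5 * (a_iq / a_dq) * d_sd_tot   # Equation (4)
print(f"A_Iq = {a_iq:.3e}, A_Dq = {a_dq:.3e}, <I_TOT,q> = {i_tot_theor:.3e}")
```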
However, we need to discuss the correlation between the 3D molecular conformations of the MS ions and their energetics, thereby highlighting the advantages of Equation (3), which offers significant sensitivity and selectivity and is capable of distinguishing quantitatively among molecular structures exhibiting subtle electronic effects. The static and MD DFT results for the ions in the GS and TS states, together with the atomic coordinates of the fragmentation ions, allow us to extract geometry parameters such as bond lengths and angles. The energy difference between fragmentation species of the surfactants such as ions 212_a and 212_b is ΔE_TOT = |0.015| a.u. The difference in the energetics of the molecular ions [M+H]+ of PARA and d3-PARA is of the same order of magnitude (ΔE_TOT = |0.01| a.u.). In these and many more cases, the ions provide ample proof in favour of Equation (3) as a sensitive and selective tool, allowing us to distinguish among molecular species exhibiting comparable energetics. There are almost identical ΔE_TOT values for ions of the tautomers of PARA and d3-PARA. The examples provide real insight into the complexity of the electronic effects and dynamics of the MS ions, which cannot be tackled precisely when examining only free Gibbs energy data at the global minimum of the PES. Despite the fact that ΔE_TOT = |0.01| a.u. for ions 152_a and 155_a of the molecular cations of PARA and d3-PARA, the difference in the D_QC parameter is ΔD_QC = |3.371|.

2.5.2. Determination of Quantum Chemical Diffusion Data

Details of the calculation tasks for the D_QC parameters of Equation (3) can be found in the cited work. Methodologically, we use vibrational data on the MS ions at the GSs and TSs. Variations and changes in the energetics of the species can be examined adequately via Born–Oppenheimer MD. The D_QC parameters of the MS ions studied herein are summarized.

2.6. Correlative Data on Mass Spectrometry and Quantum Chemistry

We return to the major question that we posed at the beginning of the study: how does Equation (2) serve as a tool to determine the 3D molecular and electronic structures of analytes mass-spectrometrically, even when examining multicomponent environmental and biological samples with complex sample matrix effects? In answering it, we focus on the chemometrics of the relation D″_SD = f(D_QC). In line with our previous studies devoted to the same issue, achieving such a goal requires assessing the statistical significance of the mutual relationship between the D_QC and D″_SD data for ions belonging to one and the same molecular structure. The chemometric results for PARA at m/z 152, 158, 174, 301, and 325 depending on CE give |r| = 0.99798 at CE = 10 V and |r| = 1–0.99361 at CE = 25 V. Further, little contrast is observed in the data on d3-PARA and propranolol; in the former case, |r| = 0.99931 is obtained. The relation D″_SD = f(D_QC) for the PRO ions at m/z 260, 157, and 116 shows |r| = 0.99161.
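Equation (3) can likewise be written out directly from vibrational data, as in the sketch below, which evaluates D_QC from ground-state and transition-state frequencies and an activation enthalpy and then computes the |r| of a D″_SD = f(D_QC) correlation. The frequencies, enthalpy, and paired values are invented placeholders.

```python
# Schematic evaluation of Equation (3):
# D_QC = (prod_i nu_i^0 / prod_i nu_i^s) * exp(-dH# / (R*T)),
# with 3N ground-state and 3N-1 transition-state frequencies.
# Frequencies (s^-1), activation enthalpy, and paired D values are invented.
import numpy as np

R = 8.314462618  # J mol^-1 K^-1

def d_qc(nu_gs_hz, nu_ts_hz, dH_act_j_mol, T=298.15):
    """Equation (3), evaluated in log space to avoid overflow of the products."""
    log_ratio = np.sum(np.log(nu_gs_hz)) - np.sum(np.log(nu_ts_hz))
    return np.exp(log_ratio - dH_act_j_mol / (R * T))

# Invented example: a small ion with N = 4 atoms (3N = 12 ground-state modes,
# 3N - 1 = 11 transition-state modes) and an invented activation enthalpy.
rng = np.random.default_rng(1)
nu_gs = rng.uniform(1e13, 9e13, size=12)
nu_ts = rng.uniform(1e13, 9e13, size=11)
print(f"D_QC = {d_qc(nu_gs, nu_ts, dH_act_j_mol=4.0e4):.3e}")

# Correlation check for the relation D''_SD = f(D_QC), as reported via |r|.
d_qc_vals = np.array([1.1e12, 2.4e12, 3.0e12, 4.2e12, 5.1e12])   # placeholders
d_sd_vals = np.array([2.2e-13, 4.9e-13, 6.1e-13, 8.4e-13, 1.0e-12])
r = np.corrcoef(d_qc_vals, d_sd_vals)[0, 1]
print(f"|r| for D''_SD = f(D_QC): {abs(r):.5f}")
```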
PARA tends to stabilize not only the Cu 2+ adduct, but also adducts of alkali metal ions and NH 4 + cation. There are species of type [M+NH 4 ] + ( m / z 169), [M+Na] + ( m / z 174), [M+K] + ( m / z 190), [2M+NH 4 ] + ( m / z 320), [2M+Na] + ( m / z 325), and [2M+K] + ( m / z 341), respectively . As reveal, the abundance of peaks depends on the applied voltage, presence of formic acid, and analyte concentration. The same is true for the peak of protonated analyte [M+H] + and its major fragmentation product of N–C bond cleavage, [M-CH 3 CHC=O] + , at m / z 152 and 110. The data on d 3 -PARA show similar fragmentation patterns together with some adducts. The peak at m / z 331 belongs to [2.d 3 -M+Na] + . There is an observed fragmentation reaction CID ( m / z 152)→152, 110, 93 . MS spectra of PARA in a sludge cake and biota show competitive fragmentation mechanisms causing not only charged cations, but also cation radicals. 2.1.2. Mass Spectrometric Fragmentation Reactions of Biocides Surfactants BAC-C12, BAC-C14, BAC-C16, and BAC-C18 exhibit molecular cation [M] + . The major fragmentation path shows the loss of 92 Da of toluene . Regarding the MS spectra of PARA and its d 3 -PARA derivative , depending on the experimental conditions, there are competitive fragmentation reactions causing more than one conformational and tautomeric form of product ions . Although the major fragmentation MS path of these surfactants is associated with the loss of the hydrophilic head of the compounds, the product ion consisting of a charged hydrophobic tail exhibits a complex conformational preference and electronic effects . Therefore, after examining only experimental measurands in CID-MS/MS and SRM operation modes, a lack of assignment of observable peaks to ions remains. Consider the shape of the SRM spectrum of BAC-C12 and the proposed chemical 2D diagrams and electronic structures of ions at m / z 212 and 213. 2.1.3. Mass Spectrometric Fragmentation Reactions of β-Blockers Propranolol shows the [M+H] + cation at m / z 260 . The fragmentation paths depend on CE, pH, etc. . With increasing CE, there is a low-abundance ion at m / z 183, due to the cleavage of the [C 3 H 9 N] 0 fragment and solvent water . The peak at m / z 282 of the MS spectrum of PRO at CE = 30V is assigned to the [M+Na] + adduct. The same is true for ATE MS reactions . Their identical structural (2,3-dihydroxy-propyl)-isopropyl-ammonium fragment causes peaks at m / z 145, 105, 101, 83, and 64, respectively. ATE exhibits the [M+H] + cation at m / z 267. Quantitative analysis was carried out, examining the [M+H] + cation at m / z 260 and 267 of PRO and ATE in the mixture. Employment of the SRM and SIM modes leads to pairs of MS peaks at 260/261 and 267/268 . Classical quantitative methods look at average m / z data on MS peaks at m / z 260.5 (SRM) and 262.07 (SIM) (PRO), as well as 267.52 (SRM) and 267.82 (SIM) (ATE). The matrix affects significantly not only the m / z data compared with the results from the fragmentation paths of standard samples, but also product ions . Owing to the fact that there is used only an MS peak of the [M+H] + cation in both the cases, excluding a statistically representative set of fragmentation species, there is particular importance in accounting for the molecular conformations and electronic effects of protonated antibiotics in order to assign statistically different sets of m / z data on two SRM and SIM spectra of standard samples of antibiotics and their mixtures in environmental and biological matrixes. 
For the purposes of 3D structural analysis and empirical demonstration of the assignment of the [M+H] + cation at m / z 260 of PRO in the complex sample matrix, there is used a statistically representative set of MS peaks at m / z 260, 283, 157, 116, and 98 found in the MS spectrum in soil . The same fragmentation species have been found in the MS spectrum of PRO in biological samples . Owing to the proposed two competitive mechanisms of formation of the MS ion at m / z 183, our study examines the correlation between MS data and theoretical quantum chemical ones looking at ions 183 _a and 183 _b . The MS ion at m / z 116 in the CID-MS/MS spectrum of PRO has been observed examining tandem MS/MS processes of ATE and studying the ions’ CID interaction of the [M+H] + cation at m / z 267 . The peak at m / z 116 has been used to determine PRO in the liver, brain, and kidney thin tissue, as well . In addition to the peak at m / z 116, ATE exhibits a set of ions depending on the experimental conditions . These are peaks at m / z 225, 208, 190, 173, 162, and 145 . Moreover, there are peaks at m / z 133, 115, and 107 . 3 -Derivative Paracetamol shows fragmentation path CID ( m / z 152)→152, (134,) 110 (see chemical diagrams of species in ). Its dimer [2M+H] + shows fragmentation path CID ( m / z 303)→303, 152 . The ammonium adduct [2M+NH 4 ] + exhibits a low-abundance peak at m / z 320 . MS analysis of PARA radical-cations has been reported . Fragmentation processes, involving a radical-cation mechanism of bond cleavage, have been proposed . PARA tends to stabilize not only the Cu 2+ adduct, but also adducts of alkali metal ions and NH 4 + cation. There are species of type [M+NH 4 ] + ( m / z 169), [M+Na] + ( m / z 174), [M+K] + ( m / z 190), [2M+NH 4 ] + ( m / z 320), [2M+Na] + ( m / z 325), and [2M+K] + ( m / z 341), respectively . As reveal, the abundance of peaks depends on the applied voltage, presence of formic acid, and analyte concentration. The same is true for the peak of protonated analyte [M+H] + and its major fragmentation product of N–C bond cleavage, [M-CH 3 CHC=O] + , at m / z 152 and 110. The data on d 3 -PARA show similar fragmentation patterns together with some adducts. The peak at m / z 331 belongs to [2.d 3 -M+Na] + . There is an observed fragmentation reaction CID ( m / z 152)→152, 110, 93 . MS spectra of PARA in a sludge cake and biota show competitive fragmentation mechanisms causing not only charged cations, but also cation radicals. Surfactants BAC-C12, BAC-C14, BAC-C16, and BAC-C18 exhibit molecular cation [M] + . The major fragmentation path shows the loss of 92 Da of toluene . Regarding the MS spectra of PARA and its d 3 -PARA derivative , depending on the experimental conditions, there are competitive fragmentation reactions causing more than one conformational and tautomeric form of product ions . Although the major fragmentation MS path of these surfactants is associated with the loss of the hydrophilic head of the compounds, the product ion consisting of a charged hydrophobic tail exhibits a complex conformational preference and electronic effects . Therefore, after examining only experimental measurands in CID-MS/MS and SRM operation modes, a lack of assignment of observable peaks to ions remains. Consider the shape of the SRM spectrum of BAC-C12 and the proposed chemical 2D diagrams and electronic structures of ions at m / z 212 and 213. Propranolol shows the [M+H] + cation at m / z 260 . The fragmentation paths depend on CE, pH, etc. . 
With increasing CE, there is a low-abundance ion at m / z 183, due to the cleavage of the [C 3 H 9 N] 0 fragment and solvent water . The peak at m / z 282 of the MS spectrum of PRO at CE = 30V is assigned to the [M+Na] + adduct. The same is true for ATE MS reactions . Their identical structural (2,3-dihydroxy-propyl)-isopropyl-ammonium fragment causes peaks at m / z 145, 105, 101, 83, and 64, respectively. ATE exhibits the [M+H] + cation at m / z 267. Quantitative analysis was carried out, examining the [M+H] + cation at m / z 260 and 267 of PRO and ATE in the mixture. Employment of the SRM and SIM modes leads to pairs of MS peaks at 260/261 and 267/268 . Classical quantitative methods look at average m / z data on MS peaks at m / z 260.5 (SRM) and 262.07 (SIM) (PRO), as well as 267.52 (SRM) and 267.82 (SIM) (ATE). The matrix affects significantly not only the m / z data compared with the results from the fragmentation paths of standard samples, but also product ions . Owing to the fact that there is used only an MS peak of the [M+H] + cation in both the cases, excluding a statistically representative set of fragmentation species, there is particular importance in accounting for the molecular conformations and electronic effects of protonated antibiotics in order to assign statistically different sets of m / z data on two SRM and SIM spectra of standard samples of antibiotics and their mixtures in environmental and biological matrixes. For the purposes of 3D structural analysis and empirical demonstration of the assignment of the [M+H] + cation at m / z 260 of PRO in the complex sample matrix, there is used a statistically representative set of MS peaks at m / z 260, 283, 157, 116, and 98 found in the MS spectrum in soil . The same fragmentation species have been found in the MS spectrum of PRO in biological samples . Owing to the proposed two competitive mechanisms of formation of the MS ion at m / z 183, our study examines the correlation between MS data and theoretical quantum chemical ones looking at ions 183 _a and 183 _b . The MS ion at m / z 116 in the CID-MS/MS spectrum of PRO has been observed examining tandem MS/MS processes of ATE and studying the ions’ CID interaction of the [M+H] + cation at m / z 267 . The peak at m / z 116 has been used to determine PRO in the liver, brain, and kidney thin tissue, as well . In addition to the peak at m / z 116, ATE exhibits a set of ions depending on the experimental conditions . These are peaks at m / z 225, 208, 190, 173, 162, and 145 . Moreover, there are peaks at m / z 133, 115, and 107 . The capability of Equation (1) in determining the 3D molecular structures of analytes, when it is used complementarily with Equation (3), has already been reviewed . In this light, herein, we prove its validity and compatibility with Equation (2). In verifying empirically the validity of Equation (1), we can explore the results from SRM data on the MS ion at m / z 110 of PARA of its MS/MS spectra of the [M+H] + cation at m / z 152 . The new data on variables of PARA show lnP1 = 17.0532. Therefore, Equation (1) shows that the MS law is valid for the temporal distribution of measurands of PARA, as well. Details of the statistical parameters A i are presented in . Calculation tasks have been discussed previously . The latter figure illustrates the relation between the D ′ SD and D″ SD parameters, showing |r| = 0.9995 3 . 
The deviation from |r| = 1 is a result of the error contribution of the data processing of the temporal distribution of intensity with respect to the scan time or function (I–<I>) 2 = f(t) fitted to the SineSqr function, thus producing statistical parameter A i . We shall support our method by highlighting how the D″ SD parameters are determined per span of scan time. We shall justify the view that exact relations are obtained when there are quantified fluctuations of measurands with a short span of scan time. The question that we need to address is “Which criteria determine a set of MS measurands with respect to a concrete span of scan time as the true one?”, or which methods are used in order to validate the parameters of Equation (2). In doing so, we use data on the selected reaction monitoring mode of the PARA [M+H] + ion at m / z 152 of segments of the MS method, where segment (i) has collected a full mass scan set of variables. There are examined segment raw data QC_High_SRM_SEG_CE40_i.raw (i = 1–3) . lists the output, only, of the fragmentation ion at m / z 110 for the SRM operation mode, while list results from the CID-MS/MS spectra of PARA of its [M+H] + cation at m / z 152 depending on experimental conditions such as CE and the presence of formic acid. Chemometrics of the normality Shapiro–Wilk test together with ANOVA data in are summarized in . Data on quality control standard samples of a mixture of antibiotics QC_H_SRM_CE40_3 (segment 3, ) reveal three groups of m / z parameters that are mutually significantly different from the perspective of chemometrics . These are a subset of variables of segment 3, shown as QC_High_SRM_SEG_CE40_3_1, QC_High_SRM_SEG_CE40_3_2, and QC_High_SRM_SEG_CE40_3_3. The same is true for the recorded two sets of m / z variables of the same ion of segment (2) (QC_High_SRM_SEG_CE40_2_1 and QC_High_SRM_SEG_CE40_2_2). The chemometric analysis of datasets of measurands at m / z 110.1, i.e., QC_High_SRM_SEG_CE40_3_3, QC_High_SRM_SEG_CE40_2_1, and QC_High_SRM_SEG_CE40_1, is statistically equal. In other words, correlative analysis and determination of the D″ SD parameters of Equation (2) within the framework of three segments of MS spectra is carried out using those sets of measurands that are statistically significantly equal, or those with values at m / z 110.065. Therefore, there are distinguished quantitatively three sets of peaks at m / z 110: 110.84 49 ± 0.05985, 110.06 525 ± 0.04709, and 110.23885 ± 0.04898. The number of the subset of variables is not extensive when looking at whole datasets of average values over the whole period of measurement. Results from sector (3) (QC_High_SRM_SEG_CE40_3) show a value at m / z 110.0673. In other words, our approach does not ignore measurable sets of low-abundance variables and their fluctuations. The examples could be multiplied, in fact, without limit when looking at the dataset of measurands in environmental and biological samples of antibiotics and biocides. The results provide compelling evidence for the advantages of our method. The analysis of the effluent, treated sludge cake, and biota show that PARA and its d 3 -derivative in the presence of antibiotics or biocides reveal peaks at a range of m / z 107–115, which further complicate not only the qualitative assignment of products, but also the quantification of those single analytes in mixtures, as well as the deuterium exchange processes of d 3 -PARA, if any. 
Quantitative data on biocides in biota show r 2 = 0.9309–0.985 using the classical approach to quantify the average total intensity of MS peaks over the whole period of measurements . The results were data-processed via the ICIS algorithm of peak detection. The Savitzky–Golay smoothing function with baseline correction was used. The ICIS algorithm involves a trapezoidal integration approach . In order to account for the effect of the smoothing function on chemometrics, herein, we examined the same relationships, but applying baseline correction and TIA ( and ). illustrates relationships among the concentrations of BAC-C12 and BAC-C14 in biota showing |r|= 0.9874 5 and 0.9922 3 . Despite this, the ICSI or TIA methods show low |r| parameters and high sd(yEr±) for mean values. The linear equation obtained via the direct application of TIA of BAC-C14 is y = −16.1 3779 ± 14.6 133 + 6.5 4102 ± 0.3 3481 .x. The error contribution of the data-processing algorithm affects the main value of the intercept and slope of linear regression, respectively, and the correlation equations. The reliability of quantitative data is complicated when examining biota. There are competitive fragmentation reactions producing both mono-cations and cation radicals. The CID-MS spectra of PARA of freeze-dried biota show peaks at m / z 108.07 and 109.07. The wet sample shows MS ions at m / z 107.07, 110.07, and 111.1 ( , and and ). The mixtures of biocides show pairs of ions at m / z 211 and 212 (BAC-C12) and 239 and 240 (BAC-C14), instead of a single ion according to the common fragmentation scheme. The data processing of isotopologies in biota via the latter algorithms yields high sd(yEr±) values . Either by the ICIS or TIA algorithm, with or without baseline correction, the uncertainty of the analytical results is increased. The task can be completed precisely by Equation (2). ICIS and TIA are incapable of providing reliable analytical data on environmental and biological samples, particularly when there are tautomers and fragmentation reactions involving different molecular-level mechanisms, leading not only to cations, but also to cation radicals. Moreover, it is true that ca. 25% of pharmaceutics exist in more than one tautomeric form, in addition to the fact that almost all antibiotics are characterized by multiple ionizable protonation positions in their molecules . Reference concerns the same problem of quantifying LMW antibiotics in biological fluid. Thus, the following paragraph focuses on the quantitative analysis of biocides using Equation (2). We shall describe the advantages of Formula (2) compared with the results from the ICIS or TIA methods presented so far. summarizes m / z data on BAC-C12 and BAC-C14 in biota at concentrations c = 2–80 ng.mL –1 . ANOVA and t -tests show two sets of m / z values at 212.2 2211 ± 0.12988 and m / z 211.6 at c = 2 ng.mL −1 (BAC-C12), which are statistically significantly different. At c = 6 ng.mL −1 are distinguished three sets of measurands at m / z 213, 212, and 211.5 . There are three elemental compositions, molecular conformations, and electronic structures of BAC-C12 species (212.2 1401 ± 0.1001, 212.5 4356 ± 0.12232, and 211.7 86 ± 0.15203.) Datasets at m / z 212.22 211 ± 0.12988 and 212.2 1401 ± 0.1001 of BAC-C12 ions at c = 2 and 6 ng.mL −1 are statistically not significantly different. The ion at m / z 212.2 belongs to one and the same ion at two concentrations. 
Further, we shall come to see that Equation (2) accounts precisely for the fluctuations in the m / z and intensity data on MS peaks, thus producing excellent-to-exact quantification and 3D structural analysis, despite the complexity of the isotope shape. and and show that the quantification of biocide BAC-C12 in biota employing the D″ SD parameter and assessing the relationship ln[ D″ SD ] = f(conc.) yields |r| = 0.9999 1 –0.9905 8 , examining c = 2–80 ng.(mL) −1 . Conversely, the ICIS and TIA algorithms produce |r|= 0.98924 . There has been obtained |r| = 0.999 when studying the same set of analytes in sludge . Data on PARA and d 3 -PARA c = 5–400 ng.(mL) –1 show r 2 = 0.997. Quantification of BAC-C12 and BAC-C14 yields r 2 = 0.987 and 0.983 . The analysis of the peak at m / z 211.75 ± 0.15 of BAC-C12 ions in biota using the [M+H] + cation at m / z 304 ( and ) produces |r| = 0.9972 1 and 0.9841 1 when employing the equation D″ SD = f(conc.) Again, there is improved method performance. The analysis of BAC-C14 yields |r| = 0.9918 8 within concentration range c = 20–80 ng.(mL) −1 . Since the main goal of the current paper is to advocate for a general innovative approach to quantifying analytes in complex environmental and biological matrixes, mass-spectrometrically via Equation (2), in this short subsection, we shall direct the reader’s attention to Equation (4), appearing valid for MS data on labetalol . (4) I T O T , q ¯ = 1 2 × A I q A D q × D S D ″ , t o t Equation (4) is derived from Equation (1) (see Equation (A6) in ). It connects the theoretical average intensity data on analyte MS ions obtained toward CE and the D″ SD of Equation (2). Statistical parameters A D q and A I q are functional amplitudes of the SineSqr function fitted with relation D″ SD q = f(CE) and <I> q = f(CE) of q th MS fragment ion. We look at new empirical proof of the validity of Equation (4). It has been found by examining PARA measurands . depict the relations of D″ SD q = f(CE) and <I> q = f(CE) of MS ions of PARA at m / z 60 and 64. The theoretical <I> theor data on ions with respect to CEs and A D q , A I q parameters correlate with the experimental <I> exp ones and show |r|= 0.9942 9 and 0.96501. Since, so far, we have considered only two cases of the application of Equation (4) for the latter purposes, we are unable, currently, to assess the apparent violation of its validity. 2.5.1. 3D Molecular Conformations and Electronic Structures of Analytes and Energetics The calculation of the D QC parameters of Equation (3) has been discussed . However, we need to discuss the correlation between the 3D molecular conformation of MS ions and their energetics, thus highlighting the advantages of Equation (3), consisting of significant sensitivity and selectivity and capable of distinguishing quantitatively among molecular structures, exhibiting subtle electronic effects . detail the static and MD DFT results from ions in GS and TS states. summarizes the atomic coordinates of fragmentation ions, allowing us to extract geometry parameters such as bond lengths and angles. The energy difference in the fragmentation species of surfactants such as ions 212 _a and 212 _b is ∆E TOT = |0.015| a.u. The difference in the energetics of molecular ion [M+H] + of PARA and d 3 -PARA is of the same magnitude order (∆E TOT = |0.01| a.u.). 
In these and many more cases of ions , ample proof is provided favoring Equation (3) as a sensitive and selective tool, allowing us to distinguish among molecular species exhibiting comparable energetics. There are almost identical ∆E TOT values for ions of the tautomers of PARA and d 3 -PARA . The examples of species provide us with real insights into the complexity of the electronic effects and dynamics of MS ions, which cannot be tackled precisely when examining only free Gibbs energy data on the global minimum of the PES . Despite the fact that ∆E TOT = |0.01| a.u. for ions 152_a and 155_a of the molecular cations of PARA and d 3 -PARA, the difference in the D QC parameter is ∆D QC = |3.371| . 2.5.2. Determination of Quantum Chemical Diffusion Data Details on the calculation tasks of the D QC parameters of Equation (3) can be found in . Methodologically, we use vibrational data on MS ions at GSs and TSs. Variations and changes in the energetics of species can be examined adequately via Born–Oppenheimer MD. summarizes the D QC parameters of the MS ions studied herein. We return to the major question that we posed at the beginning of the study: How does Equation (2) serve as a tool to determine the 3D molecular and electronic structures of analytes mass-spectrometrically, even when examining multicomponent environmental and biological samples with complex sample matrix effects? In answering it, we shall focus on the chemometrics of the relation D″ SD = f(D QC ).
In line with our previous studies devoted to the same issue , achieving such a goal requires the assessment of the statistical significance of the mutual relationship between D QC and D″ SD data on ions belonging to one and the same molecular structure. shows the chemometric results from PARA at m / z 152, 158, 174, 301, and 325 depending on CE ( and ). There are |r| = 0.99798 at CE = 10 V and |r| = 1–0.99361 at CE = 25 V. Further, little contrast might be observed in , depicting data on d 3 -PARA and propranolol . In the former case, |r| = 0.99931 is obtained. The relation D″ SD = f( D QC ) of PRO ions at m / z 260, 157, and 116 shows |r| = 0.99161 . Since the purpose of the study is to gain insights into quantitative functionalities among the MS measurands of analytes’ molecular and fragmentation peaks, the physico-chemical properties and parameters of molecular and ionic species, their 3D molecular and electronic structures, and the experimental factors and parameters of measurements, this section might be regarded as room for debate. We suggest that the discussion helps the reader to understand whether Equations (1) and (2) are capable of providing not only the exact quantification of analytes in complex biological and environmental matrixes, but also the simultaneous 3D structural determination of the same compounds and samples. However, before embarking on a discussion of the advantages of Equations (1) and (2), we provide a few remarks on the data reported so far. To begin with, methodological contributions devoted to developing quantitative methods for the analysis of datasets of measurands and those devoted to elaborating methods for 3D structural MS analysis are not equally frequent. Therefore, developed methods for simultaneous quantitative and 3D structural analyses, and approaches capable of providing the exact determination of the amounts and structural parameters of molecules, are restricted. We draw the reader’s attention to the fact that Equation (2) is one of the scarce examples of formulas used for both quantitative and 3D structural analyses. However, the latter statements lead us to a logical question: Why should we be forced to become aware of details of analytes’ 3D molecular structures, given that quantitative analytical mass spectrometry and structural mass spectrometry represent different areas? Routinely, we process MS-based quantification as a separate research task. It is well known that, with such a division of research tasks, we are fully able to characterize and quantify analytes mass-spectrometrically. This combined set of research tasks would seem to complicate the further experimental design of the MS analysis of environmental and biological samples. An answer to such a question, if any, would be that, since the stochastic dynamic model Equation (2) is a novel analytical MS law, it is best to provide, herein, an immediate illustration of the crucial importance of the capability of Formula (2) to quantify and determine the 3D structures of analytes via MS for the purpose of the quantitative analysis of complex environmental and biological samples. Our earlier and most recent outcome of the application of Equation (2) to determine exactly LMW analytes in biological fluids —which, however, largely matches the results from the current study—perhaps best illustrates the advantages of our stochastic dynamic theorization of MS phenomena via Equation (2) over classical quantitative approaches.
For instance, these include the ICIS or TIA algorithms, which deal with the integration of the area of the MS shape of analyte fragment peaks as a continuous function of the m / z values with respect to MS intensity, instead of treating them as discrete random variables and their fluctuations within a short span of scan time, as according to Formulas (1) and (2). As the MS analysis of metronidazole in clinical human urine has demonstrated , the analyte molecular ion [M+H] + is characterized by a set of statistically significantly different m / z variables depending on the experimental conditions, particularly at low analyte concentrations within a range of 2.5 to 25,000 ng.(mL) −1 . It has been found that two datasets of measurands are observed mass-spectrometrically at m / z 172.0718 and 172.04081 of the [M+H] + cation. As can be expected, the employment of classical quantitative approaches to determine the analyte concentration yields |r| = 0.99395–0.99404 using the linear calibration equation I TOT = f(conc.), where I TOT is determined via the ICIS or TIA algorithms. The decrease in method performance has been explained by the fact that, on the one hand, isotope shapes of two different m / z quantities are quantified, which can even belong to two different analytes in complex biological samples when unknown compounds are determined. On the other hand, the error contribution to the mathematical data processing of MS patterns by means of the ICIS or TIA algorithms is significant, due to the large sd(yEr±) values of the integration approaches. For these reasons, we suggest that the reliable and exact quantitative analysis of such complicated cases of fluctuations of MS measurands at very low analyte concentrations and complex matrix effects can be carried out exactly, accurately, precisely, selectively, and sensitively only via Equation (2) and the simultaneous quantitative and 3D structural analysis of analytes with respect to the experimental conditions of measurements. These combined research tasks allow us to assign exactly statistically different sets of measurable variables to the corresponding molecular conformations and electronic structures of analytes. It has been found that the aforementioned MS peaks of metronidazole at m / z 172.0718 and 172.04081 belong to its two different tautomeric forms. The quantitative analysis performed on the basis of two different statistical calibration equations D″ SD = f(conc.) of the two fragmentation peaks has resulted in |r| = 1. Turning to the results from this study in quantifying biocides BAC-C12 and BAC-C14 in biota, analogous cases of the temporal distribution and variations of MS measurands of these analytes in complex matrix samples, depending on the experimental conditions, can easily be shown, particularly highlighting a low analyte concentration as a major factor causing the observation of sets of statistically different m / z measurable variables belonging to different molecular conformations and electronic structures of analyte fragmentation species. For these reasons, the employment of the ICIS or TIA algorithms, quantifying the isotope shape area of the function m / z = f(I), yields |r| = 0.9304–0.9856. Conversely, as the results from our analysis using Equation (2) show, there are statistically significant sets of variables of fragmentation ions at m / z 212 of BAC-C12 obtained as a result of the SRM tandem fragmentation mode of the molecular cation of the analyte [M+H] + at m / z 304 .
The statistical linear calibration models ln D″ SD = f(conc.) and D″ SD = f(conc.) have resulted in exact method performance, showing |r| = 0.99991–0.99059 and 0.99721 . The MS peaks of BAC-C12 at m / z 212.209 ± 0.1 and 211.75 ± 0.15 are examined. Thus, again, a highly reliable and very prominent quantitative analysis is observed when we use Equation (2) instead of classical quantitative MS approaches based on the aforementioned algorithms. The new data presented in this paper on the MS quantitative analysis of mixtures of biocides and antibiotics in biota and sewage sludge clearly show the capability of the exact and reliable processing of complex isotope shapes of MS measurands, obtained as a result of competitive processes of tautomers and of mechanisms involving the formation of cations and cation radicals, which is not only far from universal, but is also beyond the capability of classical quantitative methods for the data processing of MS measurands. It is reasonable to assume, therefore, that classical automated algorithms for the data processing of observable variables of such complex MS patterns are of little use when dealing quantitatively with the analysis of environmental and biological multicomponent samples with unknown analytes, very low analyte concentrations, and sample matrix effects. Furthermore, owing to the fact that the determination of analytes within the framework of the stochastic dynamic theory and model Equation (2) is carried out without the presence of an IS, it is obvious that the innovative method allows researchers to determine quantitatively and structurally, by mass spectrometry, any unknown analyte in a complex mixture whose measurable parameters do not fit exactly with the available ISs, or where there is a lack of suitable internal standards. The latter remark is associated with the fact that both quantitative and 2D structural analytical methods for mass spectrometry have, so far, mainly used ISs. Thus, a so-called confirmed structure is currently understood as (a) a reported exact mass; (b) an unequivocally determined molecular formula; and (c) a single confirmed structure, which is obtained by means of an IS. However, environmental and biological samples often contain analytes lacking suitable ISs. Therefore, even 2D structural MS analysis produces a so-called possible structure or tentative candidates of a 2D chemical diagram. On the other hand, we should distinguish between so-called 2D chemical diagrams and 3D molecular structures as well. The 2D diagrams are obtained according to the rule of the degree of unsaturation, in addition to concepts of atomic valence and oxidation states. We note the following statements: (a) “…the sum of the valences of all the bonds formed by an ion is equal to the valence of the ion”, and (b) “…the stoichiometry must be obeyed by electro neutrality principle” . However, 2D structures or 2D diagrams do not tell us anything about the chemical reactivity and the chemistry of the molecules. Why? The term “molecular structure” means a generic property determined by an ensemble of atoms in a molecule . However, analytical statements claiming 3D molecular structures should be based on electronic structural analysis, which is reliable only when there is information about the electron density maps of the ensemble of atoms in the molecules. The electron density maps are proof of the probability density distribution, which is observable experimentally.
These maps determine the probability of finding electrons within infinitesimally small volumes and positions in 3D space . The so-called total energy is determined on the basis of the electron density maps. Therefore, any 3D molecular model or 3D molecular structure is characterized by a unique total energy quantity. In other words, from the perspective of structural chemistry, a 3D molecular structure is understood as a 3D molecular conformation and the corresponding electronic structures, which are unique as a whole. There is a lack of corresponding disordered structural fragments. Of course, an objection to the latter statements could be made by arguing that this study deals with the complicated case of the molecular structures of biocides showing a set of 3D molecular conformations, thus leading to a significant variation in m / z measurands and corresponding fluctuations in the observable m / z and intensity parameters of fragmentation ions. However, part of the answer to such a question, if any, lies in the fact that the statements outlined above are evident from the results of work and from those reported herein on the analysis of biocides in biota and sewage sludge. Perhaps the most striking empirical evidence for the latter statements is provided by the results from this study analyzing the temporal distribution of measurable variables of standard PARA and its d 3 -derivative, particularly examining the fragmentation reactions of the MS molecular ions [M+H] + and [d 3 -M+H] + at m / z 152, 155, 158, 174, 301, and 325, which yield an exact coefficient of linear correlation between the D QC and D″ SD data at CE = 25 V . 4.1. Chemicals and Analytical Instrumentation Paracetamol (acetaminophen, 4′-hydroxyacetanilide, N-(4-hydroxy-phenyl)-acetamide), atenolol (2-[4-(2-hydroxy-3-isopropylamino-propoxy)-phenyl]-acetamide), propranolol (1-isopropylamino-3-(naphthalen-1-yloxy)-propan-2-ol), benzyl-dodecyl-dimethyl-ammonium chloride (BAC-C12), benzyl-dimethyl-tetradecyl-ammonium chloride (BAC-C14), benzyl-hexadecyl-dimethyl-ammonium chloride (BAC-C16), and benzyl-dimethyl-octadecyl-ammonium chloride (BAC-C18) were Sigma Aldrich products. Thermo Finnigan LC (Massachusetts, USA) instrumentation equipped with a Micro AS autosampler and MSPump Plus was used. LC columns, namely the Waters Xbridge C18 column (Milford, USA; 1.0 × 100 mm ID, 3.5 μm), Waters Xselect charged surface hybrid C18 column (2.1 × 150 mm ID, 3.5 μm), Waters Xselect high-strength silica T3 column (1.0 × 100 mm ID, 3.5 μm), and a Phenomenex KrudKatcher Ultra 0.5 micron in-line filter, were used . Experimental conditions of the MS measurements are listed in . The study used the MS database on MS measurements available in . 4.2. Sample Preparation Methods, Samples, and Solutions See details in . 4.3. Theory/Computations The GAUSSIAN 98, 09; Dalton2011, and Gamess-US program packages were employed. Ab initio and DFT molecular optimization was carried out by means of the B3LYP, B3PW91, and ωB97X-D methods. Truhlar’s functional M06-2X was used . The Berny algorithm was used to determine GSs. The stationary points on the PES were obtained by harmonic vibrational analysis. The cc-pVDZ basis set by Dunning, the 6-31++G(2d,2p) basis set, and quasirelativistic effective core pseudo-potentials from Stuttgart–Dresden(–Bonn) were used. MD computations were performed by ab initio BOMD, which was carried out with the M062X functional and SDD or cc-pVDZ basis sets, without considering periodic boundary conditions.
Allinger’s MM2 force field was utilized . The low-order torsion terms were accounted for with higher priority than van der Waals interactions. The accuracy of the method compared with experiments was 1.5 kJ.mol −1 for diamantane, or 5.71·10 −4 a.u. 4.4. Chemometrics The software R4Cal 4.1.14 Open Office STATISTICs for Windows 7 was used. The statistical significance was checked by a t -test. The model fit was determined by an F-test. ANOVA was also used . The ProteoWizard 3.0.11565.0 (2017), mMass 5.0.0, QuanBrowser 2.0.7 (Thermo Fischer Scientific Inc., Massachusetts, USA), and AMDIS 2.71 (2012) software were utilized. Results from the study provide empirical evidence for the following conclusions. (A) In testing the capability of Equation (2) [ $D''_{SD,tot} = \sum_i^n D''_{SD,i} = \sum_i^n 2.6388 \times 10^{-17} \times \left( \overline{I_i^2} - \overline{I_i}^2 \right)$ ] to quantify the MS intensity of analyte ions within a short span of scan time, we contrasted it with classical quantitative methods based on the ICIS and trapezoidal integration algorithms of peak detection. The analysis of surfactants in biota via the equation ln[D″ SD ] = f(conc.) yields |r| = 0.99991, examining the peaks of BAC-C12 at m/z 212.209 ± 0.1 and 211.75 ± 0.15.
(B) Equation (4) [ $\overline{I_{TOT,q}} = \frac{1}{2} \times \frac{A_{I_q}}{A_{D_q}} \times D''_{SD,tot}$ ] has been proven for PARA ions. The relation between <I> exp and <I> theor shows |r| = 0.99429 and 0.96501. (C) The parameter |r| = 1 has been obtained when determining the 3D molecular structures of PARA and its ions at m/z 152, 158, 174, 301, and 325 via the assessment of the relation D″ SD = f(D QC ) in biota at CE = 25 V.
Application of Ozonation-Biodegradation Hybrid System for Polycyclic Aromatic Hydrocarbons Degradation
Increasing industrialization and negative anthropogenic activities have caused severe environmental pollution worldwide . Although efforts to reduce emissions of toxic organic compounds into the environment have been ongoing for decades, their negative impact on the environment and on societies remains significant. In many cases, these compounds self-degrade very slowly in the environment, so that their concentrations in contaminated sites remain high for a very long time . Polycyclic aromatic hydrocarbons (PAHs), as widespread environmental pollutants, are generally characterized by high melting and boiling points, low solubility, and low vapor pressure , and have carcinogenic, mutagenic, teratogenic, and estrogenic properties and can pose a significant threat to human health . PAHs are formed as a product of incomplete combustion in various combustion sources (coal, oil, wood, and automobile emissions) [ , , ]. Due to their high lipophilicity and stability, PAHs accumulate in the fatty tissues of fish after food ingestion or through sorption by the gills and skin . Creosote, a mixture of hydrocarbons, in particular PAHs, has been used for many decades in many countries around the world to preserve wooden products, including fences, posts, masts, farm buildings, etc. . This makes the sources of creosote contamination widespread. The use of these treated materials leads to the progressive release of creosote into the soil, surface water, and wastewater. The most common biological methods for the treatment of PAHs in water are phytoremediation and microbial bioremediation . For sustainable environmental cleanup, bioremediation is widely preferred because it is considered to be an effective, economical, and environmentally friendly method. Microorganisms used in bioremediation detoxify (by degradation, mineralization, and accumulation) many harmful and biodegradable pollutants, which they convert into less harmful forms . The activity of microorganisms is influenced by their species, their genes, and the conditions of the study. Due to the harmful effects of PAHs on some microbes, microorganisms must adapt to the prevailing conditions and form new microbial communities. Strains selected from contaminated areas show a high capacity to degrade PAHs . They can be multiplied and transferred to other contaminated areas. In addition, the DNA of such microorganisms contains resistance or degradation genes that can be isolated and recombined to increase the efficiency of bioremediation . Both PAHs and their metabolites can affect the degradation of PAHs. For example, through co-metabolism, microorganisms can utilize poorly available PAHs in the presence of readily available PAHs as carbon and energy sources . The biodegradation of creosote, like that of any PAH mixture, is hampered by the relatively small number of microorganisms capable of this process . It is usually necessary to use bioaugmentation, i.e., to inoculate the contaminated area or water with consortia of specialised strains with high biodegradation potential . Biodegradation of PAHs can be assisted by various methods. These include combined stimulation with sodium acetate/phthalic acid , biosurfactants for oil spills , interactions with biocarbon , and co-composting with animal manures . Another solution is to use physicochemical methods aimed at pre-oxidation or the decomposition of PAHs .
One of the effective chemical methods for many micropollutants, including pharmaceuticals and pesticides , preservative agents , and dyes , as well as PAHs, is ozonation. It is well known that ozone is a strong oxidant that can react with chemical compounds directly (molecular ozone as the main oxidant) and indirectly (hydroxyl radicals generated via ozone decomposition are the main oxidants). It should be noted that ozone is a selective oxidant and primarily attacks chemicals that are electronically dense, e.g., containing an aromatic ring, while hydroxyl radicals are non-selective species. Ozonation has been applied for PAHs removal from different media, including water, soil, as well as waste-activated sludge [ , , , ]. Ji and coworkers studied the ozonation of petroleum hydrocarbons in seawater . They observed the rapid oxidation of PAHs at various gaseous ozone concentrations and the acceleration of their degradation in the presence of the oil dispersant and with increasing salinity. Important roles in the degradation process are played by both direct and indirect ozonation . Ozone-based processes can be applied in combination with other methods such as biodegradation [ , , ], which allows for a reduction in the cost of degradation. The literature illustrates different configurations of ozonation and biological degradation, such as ozonation-biodegradation, biodegradation-ozonation, and biodegradation-ozonation-biodegradation sequences [ , , , ]. During the pre-ozonation process, soluble and biodegradable compounds that support microbial growth and activity can be formed; therefore, ozone pre-treatment usually enhances the biodegradability of wastewater . Furthermore, biodegradation can be applied as a polishing step of the treatment . On the other hand, the post-ozonation process can lead to the removal of resistant micropollutant residuals. Different combinations of ozonation and biological methods have been applied for the detoxification of industrial textile wastewater , oil sands process water , pharmaceutical wastewater , olive mill wastewater , and urban wastewater . The literature includes examples of the application of ozonation combined with the biodegradation process for PAHs removal [ , , , ]. For example, Kulik et al. studied the removal of PAHs from creosote-contaminated sand and peat . To the best of our knowledge, the topic of the combined ozonation and biodegradation of PAHs using microbial strains isolated from environmental samples has not been significantly investigated in recent years. The aim of this study was to investigate the degradation of creosote hydrocarbons through the utilization of a combined treatment process: ozonation and biodegradation. Ozonation was applied as the first step. As the biodegradation culture, a consortium of five microbial strains isolated from hydrocarbon-contaminated environmental samples was used. The influence of the ozone concentration and of the procedure of culture preparation on the removal of PAHs was investigated.
2.1. Materials 2.1.1. Chemicals All chemicals used in the research were of analytical grade and were purchased from Merck (Darmstadt, Germany). The nutrient broth was purchased from BTL Sp. z o.o. (Łódź, Poland). The creosote (type B) was purchased from Centrala Obrotu Towarami Masowymi DAW-BYTOM Sp. z o.o. (Bytom, Poland). 2.1.2. Bacterial Strains The biodegradation cultures were inoculated with the consortium of five microbial strains isolated from hydrocarbon-contaminated environmental samples: Pseudomonas sp. MChB (GenBank No. KU563540), Pseudomonas sp. OS4 (GenBank No. KP096512), Raoultella planticola SA2 (GenBank No. KP096517), Achromobacter sp. KW1 (GenBank No. KP096519), and Rahnella aquatilis DA2 (GenBank No. KP096518). 2.2. Methods 2.2.1. Culture Preparation The microorganisms’ cultures were prepared according to the procedure described by Smułek et al. with some modifications. Briefly, bacterial strains were revived in nutrient broth (a portion of biomass per loop in 20 mL of broth), incubated for 48 h at 30 °C, and then the cultures were centrifuged and suspended in saline to give OD 600nm of 0.9–1.0. After equilibration of OD 600nm , suspensions of strains were mixed in a ratio of 1:1:1:1:1 v / v . We added the inoculum to the finished cultures at a ratio of 5 mL of suspension per 100 mL of culture. The synthetic wastewater was prepared according to OECD (The Organisation for Economic Co-operation and Development) procedure: “Test No. 303: Simulation Test—Aerobic Sewage Treatment—A: Activated Sludge Units; B: Biofilms, 2001. OECD Guidelines for the Testing of Chemicals, ”, as was described by Zdarta et al. . The making of all samples initially consisted of taking 4 mL of synthetic wastewater and diluting it with demineralized water, followed by sterilization (15 min, 1 atm, 123–126 °C). Ingredients were added to each sample according to the . The pH of the cultures was 5. For Culture 2 and 3, the inoculum was added after ozone treatment (mixed vigorously and left in a dark place for 24 h), and for Culture 4 and 5, the inoculum was added before the addition of creosote and ozonated water. The ozonated water was prepared by ozone bubbling into demineralized water. The ozone was generated from oxygen in the BMT 802 N ozonator (BMT Messtechnik GMBH, Berlin, Germany). The ozone concentration in the gas stream in the inlet of the reactor was measured by a BMT 964BT ozone analyzer (BMT Messtechnik GMBH, Berlin, Germany). The ozone concentration in the ozonated water was determined using spectrophotometric measurements (with a Jasco V-630 apparatus). 2.2.2. Biodegradation Tests The five cultures in three replicates (15 incubation bottles) were incubated on a rotary shaker for 12 weeks at 30 °C in the dark. Samples were taken every two weeks for the analysis of residual hydrocarbons, total organic carbon, chemical oxygen demand, and microbial activity. 2.2.3. Total Organic Carbon and Chemical Oxygen Demand The total organic carbon (TOC) was determined using a two-stage process with the usage of a TOC-X5 shaker (HACH LANGE Sp. z o.o., Wrocław, Poland), a LT200 thermostat (HACH LANGE Sp. z o.o., Wrocław, Poland), a DR 3900 photometer (HACH LANGE Sp. z o.o., Wrocław, Poland) and LCK386 cuvettes (HACH LANGE Sp. z o.o., Wrocław, Poland ). In a two-stage process, the total inorganic carbon was first expelled with the help of the TOC-X5 shaker, and the TOC was then oxidized to carbon dioxide using the thermostat. 
The carbon dioxide passed through a membrane into the indicator cuvette, where it caused a color change to occur, and this was evaluated with a photometer. The detailed description of this method can be found in the working procedure . The chemical oxygen demand (COD) was determined using a standard dichromate method with a LT200 thermostat (HACH LANGE Sp. z o.o., Ames, IA, USA), a DR 3900 photometer (HACH LANGE Sp. z o.o., Ames, IA, USA), and LCK314 cuvettes (HACH LANGE Sp. z o.o., Wrocław, Poland). A detailed description of this method can be found in the working procedure . 2.2.4. Gas Chromatographic Analyses For the quantitative and qualitative analysis of hydrocarbons, 30 mL of the cultures (mixed and shaken earlier to provide homogeneity of samples) were used. The samples were placed in 50 mL plastic tubes and extracted with 5 mL of hexane. They were then transferred to chromatographic vials and analyzed as follows: helium as a carrier gas (1 mL min −1 ); oven temperature program: 40 °C for the first 2 min and then increased to 300 °C at a rate of 15 °C min −1 (the final temperature was kept for 15 min). The analyses were conducted using a Pegasus 4D GCxGC-TOFMS (LECO, St. Joseph, MI, USA) equipped with a BPX-5 column (60 m, 250 μm, 0.25 μm). The obtained chromatograms are presented in the . The quantity of the residual hydrocarbons was measured using a calibration curve, and the final content was corrected based on the values determined for the control and abiotic samples. 2.2.5. Microbial Activity Measurements Measurements of the bacteria cells’ activity were performed using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay (MTT) according to the method described by . Briefly, 0.5 mL of cultures were mixed with 0.05 mL of 5 g L −1 MTT solution and incubated for 48 h. After incubation, the cultures were centrifuged at 11,000× g . The supernatant was discarded, and the pellet (the formazan precipitate formed by viable cells) was dissolved in 0.25 mL of propan-2-ol. Afterward, the samples were centrifuged again at 4000× g , and the supernatant was analysed on a UV-VIS spectrophotometer at 560 nm. 2.2.6. Statistical Analyses and Initial Reaction Rates The results presented in the study were calculated as an average value from at least three independent experiments. A variance analysis and Student’s t -test were used to determine the statistical significance of differences between the average values. The differences were considered statistically significant at p < 0.05. The initial reaction rates were calculated by differentiating an exponential curve fitted to the experimental points (concentration, time) with a correlation factor higher than 0.98. The calculations were conducted using Excel 2019 (Microsoft Office Professional 2019) and OriginPro 2022 (OriginLab 2022) software.
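A minimal sketch of the initial-rate calculation described in Section 2.2.6 is given below. It is our own illustration, not the authors' script (the study used Excel 2019 and OriginPro 2022), and the time/concentration values are placeholders rather than measured data: an exponential decay is fitted to the experimental points and differentiated at t = 0 to give the initial rate.

```python
# Illustrative sketch of the initial-rate calculation of Section 2.2.6.
# The data below are placeholders, not measured values from the study.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, c0, k):
    """Single-exponential decay, C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

# hypothetical sampling times (h, every two weeks) and hydrocarbon concentrations (ppm)
t_h = np.array([0, 336, 672, 1008, 1344, 1680, 2016], dtype=float)
c_ppm = np.array([10.0, 7.9, 6.6, 5.3, 4.1, 3.2, 2.4])

(c0, k), _ = curve_fit(exp_decay, t_h, c_ppm, p0=(c_ppm[0], 1e-3))
r = np.corrcoef(c_ppm, exp_decay(t_h, c0, k))[0, 1]   # goodness-of-fit check (>0.98 required)

# |dC/dt| at t = 0 for C(t) = C0*exp(-k*t) is C0*k, i.e., the initial degradation rate
initial_rate = c0 * k
print(f"C0 = {c0:.2f} ppm, k = {k:.2e} h^-1, r = {r:.3f}")
print(f"initial degradation rate = {initial_rate:.4f} ppm h^-1")
```

With placeholder data of this kind, the fitted initial rate comes out in the same order of magnitude (10^-3–10^-2 ppm h^-1) as the values reported for the cultures in Section 3.1, which is the intended sanity check of the procedure rather than a reproduction of the study's numbers.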
3.1. Creosote Hydrocarbons Degradation The crucial parameter describing the self-cleaning potential of the different tested systems was the total hydrocarbon content ( ). In Culture 1, without the ozonation process, biodegradation proceeded relatively slowly during the first six weeks, when the hydrocarbon content dropped to 6.4 ppm; the degradation rate then increased, reaching 2.8 ppm after a further two weeks, and then the process slowed down again. Finally, after 12 weeks, the creosote hydrocarbon content was 1.5 ppm. The cultures with pre-ozonation (No. 2 and 3) presented a different process rate. The lower ozone concentration in Culture 2 promoted a more intensive biodegradation during the first weeks (reaching 5.2 ppm after four weeks), which then slowed down to reach a hydrocarbon concentration of 2.4 ppm at the end. Culture 3 showed the least biodegradation throughout the experiment, although the process accelerated significantly in the last two weeks. Finally, the hydrocarbon concentration in this culture was 1.7 ppm, which was almost the same as in Culture 1. The cultures where the ozonation process was conducted after the inoculation with bacteria (No. 4 and 5) were characterized by relatively low biodegradation effectiveness. During the experiment, the rate of hydrocarbon removal was average, but these cultures later appeared to be less effective than the other cultures. After 12 weeks, the hydrocarbon concentrations were 5.3 ppm and 3.9 ppm for Cultures 4 and 5, respectively. The determined initial reaction rates of creosote decay equaled 0.00425 ± 0.000248, 0.01634 ± 0.00000163, 0.00181 ± 0.0000353, 0.0056 ± 0.000081 and 0.0109 ± 0.00017 ppm h −1 for Cultures 1, 2, 3, 4, and 5, respectively. These results indicated that biodegradation using Culture 2 was the most effective. An additional perspective involved the monitoring of selected PAHs ( ) present in creosote oil. For the majority of the investigated hydrocarbons, Culture 2 appeared to be the most effective, especially in the first month of the experiment, when the decrease in PAH concentrations was the most visible. The second-best system was Culture 1, which was significantly less effective than Culture 2. Considering the biodegradability of the analyzed PAHs, the most resistant to degradation appeared to be quinoline, probably because of the higher toxicity caused by the presence of nitrogen in an aromatic ring. The least resistant to biodegradation were acenaphthylene and benz[a]anthracene, which were almost totally degraded after 12 weeks. However, it must be mentioned that these results refer to the primary biodegradation process, which indicates the decay of the initial form of the hydrocarbon molecule. 3.2. TOC and COD during Biodegradation shows the changes in TOC during the biodegradation of creosote oil in the different cultures. A decrease in TOC was observed for all cultures. The application of a higher ozone concentration (Cultures 3 and 5) caused a slightly greater TOC decrease in the first four weeks. The highest decay of this parameter after 12 weeks was observed for Culture 2 (almost 80% of TOC was removed). It should be noted that the decrease of TOC in Culture 1 was smaller than in the other cultures, which proved that ozonation increased the efficiency of the biodegradation of creosote oil and its transformation products. This is likely caused by the transformation of PAHs into less toxic and more biodegradable compounds during ozonation. The biodegradation was also compared in terms of COD reduction ( ).
In the case of the culture without the addition of ozone (Culture 1), a decrease in this parameter was observed after 2 weeks. After 4 weeks, the COD had increased, and after 12 weeks it had decreased by 60%. A significant increase in COD was noted for Culture 2 during 8 weeks of biodegradation. The removal of COD after 12 weeks equaled 60, 23, 62, 25 and 57% for Cultures 1, 2, 3, 4 and 5, respectively. 3.3. Changes in Microbial Activity during Biodegradation Additional knowledge about the biodegradation process was provided by the measurement of microbial activity in the cultures. The different cultures differed both in the values of cell metabolic activity and in the profile of changes in this parameter over time ( ). In particular, this concerns the point at which cellular activity reaches its maximum. In the case of Culture 1, activity during the first weeks was stable and then rose to reach its maximum at 6 weeks, after which it declined steadily until the end. The cultures with pre-ozonation (Cultures 2 and 3) were characterized by higher initial cell activity. The maximum values were measured at the beginning and after 4 weeks in the case of Culture 3 and Culture 2, respectively. The maximum activity was reported at the start for Culture 5 as well. However, the absolute value was nearly 25% lower than for Culture 3. Culture 4 demonstrated stable activity during the first two weeks, after which it decreased gradually.
The collected results draw attention to the complexity of the coupled ozonation-biodegradation process. The evaluation of the process effectiveness depends on the time perspective and the parameter measured. Many studies describe results obtained after a shorter time, such as 12 or 28 days . However, based on our previous studies on the biodegradation of creosote oil , we concluded that extending the time of the experiment would provide more promising results. This is because in earlier studies we observed that it took a relatively long time for the microorganisms to adapt, and that they entered the logarithmic and stationary phases relatively late while at the same time showing a high level of microbial multiplication. Thus, by observing the process for longer than a month, we can take into account the influence of microorganisms with slower growth rates but more efficient biodegradation, i.e., ultimately higher efficiency. presents a comparison of the results obtained in this work with other studies. The ozone concentration applied in this work was much lower in comparison with the literature data, whereas the obtained reduction percentages of COD, total hydrocarbons, and individual PAHs were comparable to or even higher than in other studies. Considering PAH disappearance in the cultures, pre-ozonation in small doses was the most successful approach, especially in the first weeks of the experiment. However, the final concentration was comparable with that obtained in cultures with no pre-treatment. An analysis of the primary biodegradation of the several PAHs monitored reveals analogous observations. The higher doses of pre-ozonation and ozonation after microbial inoculation appeared to be less effective. The causes cannot be explained simply by the negative effect of ozonation on bacterial cells, because this is contradicted by measurements of cell activity. Rather, the time shift of the measured maximum cell metabolic activity indicates prolonged bacterial adaptation to the new conditions. Moreover, the high reduction of COD in the cultures with high oxidation doses and in the biodegradation-only culture brings an additional perspective. The highest decay of TOC was achieved in the culture that was pre-ozonated with small ozone doses. According to the research performed by Chen et al., petroleum hydrocarbons can be effectively removed from soils by using sequential biodegradation and ozonation . The application of this combined method made it possible to achieve the 40–45% removal of TOC and to meet the regulatory standard for this parameter, while biodegradation alone was unable to meet the standard. Moreover, dissolved organic carbon was the dominant substrate for microorganisms when readily biodegradable hydrocarbons were no longer available. In the case of relatively biodegradable petroleum hydrocarbons, pre-ozonation and post-ozonation strategies were equally effective, while post-ozonation was more efficient for the less biodegradable hydrocarbons . Ozonation can improve the bioavailability and biodegradability of contaminants through the oxidation of organics with unsaturated functional groups. Ozone in water solutions is decomposed into oxygen relatively quickly. The ozone decay rate is mainly influenced by the following parameters: temperature, pH, as well as the presence of organic and inorganic compounds and other medium components. The half-life of ozone in distilled water at ambient temperature is approximately 25 s at pH 10, 17 min at pH 7, and 7 h at pH 4 .
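As a rough, first-order orientation (our own back-of-the-envelope estimate, not a calculation reported in the cited ozone-stability studies), the residual ozone fraction can be related to the tabulated half-lives via

$$ k = \frac{\ln 2}{t_{1/2}}, \qquad \frac{C(t)}{C_{0}} = e^{-kt} = 2^{-t/t_{1/2}}. $$

Assuming that the half-life at the culture pH of 5 lies somewhere between the pH 7 and pH 4 values quoted above, and is shortened further by ozone-consuming organics, a value on the order of 0.5–1 h would leave only about $2^{-8}$ to $2^{-4}$ (roughly 0.4–6%) of the initial ozone after 4 h, which is consistent with the assumption of essentially complete ozone consumption made below.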
The ozone concentration at the beginning of the biodegradation tests was 0.76 ± 0.17 and 6.63 ± 0.68 mg L−1, respectively ( ). Taking into account the parameters of the experiments, the pH of the cultures (equal to 5), and the presence of organic compounds, it can be assumed that the ozone was completely consumed after about 4 h of reaction. It is commonly known that many bacterial strains are very sensitive to ozone; this is why ozone is used in disinfection processes. It is worth noting that ozone is also consumed in the decomposition of other medium components, such as PAHs. In cases where the reaction of ozone with chemicals proceeds very rapidly, the adverse effect of ozone on bacteria is hardly visible. The obtained results showed that the cultures with the addition of ozone were characterized by higher initial cell activity in comparison with the culture without ozone addition ( ). However, this parameter was lower when bacteria were added soon after the water was ozonated with a higher ozone concentration (Culture 5) than when a lower ozone concentration was used (Culture 4). Thus, the ozone concentration is an important factor influencing this adverse effect on the bacteria. According to the literature, the reduction rate of Pseudomonas aeruginosa cells increased as the transferred ozone dose increased from 11 to 45 mg L−1. Ozonation with ozone doses of 5 and 10 mg L−1 was not able to eliminate Pseudomonas from the secondary effluents of two wastewater treatment plants. The performed studies showed that bacterial activity reached its maximum and then decreased. This suggests that there were no more organic compounds in the cultures that could be degraded, or that some toxic products with an adverse effect on the bacteria were formed. Brief ozonation of pyrene significantly decreases the toxicity of its intermediates, as evidenced by the increased biological oxygen demand measured in the effluent and a decrease in E. coli inhibition. The degradation of pyrene can be initiated by O3 via ring cleavage, with further oxidation proceeding via reactions with both ozone (direct ozonation) and hydroxyl radicals (indirect ozonation) until complete mineralization is reached. According to the research performed by Yang et al., the toxicity of phenanthrene byproducts formed during ozonation was also lower than that of the parent compound. Microbubble ozonation completely removed the acute toxicity of benzo[a]pyrene to Daphnia magna, whereas the toxicity reduction by macrobubble ozonation was not consistent, owing possibly to toxic degradation products. Cui et al. investigated coking wastewater, which contains high concentrations of cyanide, phenols, pyridine, quinoline, and polycyclic aromatic hydrocarbons. The toxicity of this wastewater effluent when the simultaneous combination of ozonation and biodegradation was applied was 327% and 306% lower than that of the individual biodegradation and ozonation systems, respectively. In the case of the ozonation-only system, the toxicity fluctuated slightly and presented an increasing trend, indicating that more toxic intermediates were produced. The use of ozone treatment as a step to aid the biodegradation of persistent organic pollutants has already been tested in the case of pharmaceuticals, such as tetracycline or citalopram.
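To make the half-life figures quoted above concrete, the short Python sketch below (an illustrative calculation, not part of the original study) estimates the residual dissolved ozone after 4 h assuming pure first-order self-decay, using the two half-lives that bracket the culture pH of 5 (17 min at pH 7 and 7 h at pH 4). Self-decay alone would leave a substantial residue at the slower, pH-4-like rate, which supports the point that consumption by PAHs and other medium components also contributes to the complete depletion of ozone within about 4 h.

import math

def residual_ozone(c0_mg_per_l, half_life_min, elapsed_min):
    # First-order self-decay: C(t) = C0 * exp(-k * t), with k = ln(2) / t_half
    k = math.log(2) / half_life_min
    return c0_mg_per_l * math.exp(-k * elapsed_min)

for c0 in (0.76, 6.63):              # initial concentrations reported above, mg/L
    for t_half in (17.0, 7 * 60.0):  # half-lives at pH 7 and pH 4, in minutes
        c_4h = residual_ozone(c0, t_half, 4 * 60.0)
        print(f"C0 = {c0} mg/L, t1/2 = {t_half:.0f} min -> {c_4h:.4f} mg/L after 4 h")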
In the referenced articles, a reduced toxicity of the pollutants was observed; however, the authors noted the threat posed by transformation products. Ozonation-biodegradation has also been applied to urban wastewater. In ozonation-biodegradation, in addition to the pollutant content, the concentration of soluble organic matter also affects the effectiveness of the process. The studies performed by Bernal-Martinez et al. showed that ozonation pre-treatment of sludge increased the biodegradability or bioavailability of each PAH, and that PAH removal correlated with PAH solubility. The extended polycondensation of benzene rings in polyaromatic pollutants gives them high chemical stability and low water solubility, which limits their bioavailability and removal rates. Therefore, low-molecular-weight compounds can be degraded faster than high-molecular-weight ones. One factor that may be important in understanding the results is that the applied bacterial strains were originally isolated from hydrocarbon-contaminated soils. They were used to degrade PAHs, but they had to adapt to degrade the products of PAH ozonation. It is not uncommon for PAHs and their metabolites to affect the degradation of other PAHs. According to Zhang et al., the co-metabolism of microorganisms enables the utilization of poorly available PAHs in the presence of readily available ones that provide a source of carbon and energy. For example, Micrococcus sp. did not degrade anthracene, pyrene, or fluoranthene before naphthalene and phenanthrene were added, which increased the degradation of all PAHs tested. Furthermore, replacing a monoculture with co-cultures increases the bioavailability of contaminants due to microbial enzymes induced by readily available contaminants. In this study, both the presence of several bacterial strains and the presence of 10 different PAHs undoubtedly enhanced the biodegradation of PAHs in wastewater. Special attention should be paid to the superior degradation of PAHs with more benzene rings (e.g., acenaphthylene, phenanthrene, pyrene, and benz[a]anthracene).
This study presents the results of PAH degradation using a hybrid ozonation-biodegradation system. The performed experiments showed that pre-ozonation increased the efficiency of the biodegradation of creosote oil. The combination of biodegradation and ozone pre-treatment using a small dose of ozone was the most effective in PAH removal, especially in the first weeks of the biodegradation experiment. However, the final concentration was comparable with that obtained in the culture with no pre-treatment. The determined initial reaction rates of creosote decay were 0.00425 ± 0.000248, 0.01634 ± 0.00000163, 0.00181 ± 0.0000353, 0.0056 ± 0.000081, and 0.0109 ± 0.00017 ppm h−1 for Cultures 1, 2, 3, 4, and 5, respectively. The highest decay of TOC after 12 weeks was observed for the culture pre-ozonated using a small dose of ozone (almost 80% of TOC was removed). The removal of COD after 12 weeks was 60, 23, 62, 25, and 57% for Cultures 1, 2, 3, 4, and 5, respectively. Quinoline appeared to be the most resistant to degradation. Acenaphthylene and benz[a]anthracene were less resistant to biodegradation and were degraded almost completely after 12 weeks. The applied cultures differed both in the values of cell metabolic activity and in the profile of changes in this parameter over time. The maximum cellular activity was reached at 6, 4, 0, 2, and 0 weeks of biodegradation for Cultures 1, 2, 3, 4, and 5, respectively.
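The initial reaction rates listed above are commonly estimated from the early, approximately linear part of the concentration-time curve. The Python sketch below shows one such approach (a least-squares slope over the first few sampling points); the time points and concentrations in it are placeholders for illustration only and are not data from this study.

import numpy as np

def initial_rate(time_h, conc_ppm, n_points=4):
    # Fit a straight line to the first n_points and report the decay rate as a positive value
    t = np.asarray(time_h[:n_points], dtype=float)
    c = np.asarray(conc_ppm[:n_points], dtype=float)
    slope, _intercept = np.polyfit(t, c, 1)
    return -slope

time_h = [0, 168, 336, 504]         # hypothetical sampling times (0-3 weeks, in hours)
conc_ppm = [10.0, 8.2, 6.9, 5.9]    # hypothetical creosote concentrations, ppm
print(f"initial rate = {initial_rate(time_h, conc_ppm):.5f} ppm/h")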
Primary Cutaneous B-Cell Lymphomas with Large Cell Morphology: A Practical Review
Most primary cutaneous lymphomas are constituted by small cells. T-cell lymphomas, above all mycosis fungoides, are the most common. However, lymphomas with large cell morphology may involve the skin, either as a primary localization or as secondary involvement. Primary cutaneous large cell lymphomas mainly include T-cell lymphomas, such as large cell transformation of mycosis fungoides, anaplastic large cell lymphoma, lymphomatoid papulosis type C, some cases of aggressive cytotoxic cutaneous lymphoma, and peripheral T-cell lymphoma, NOS. Primary cutaneous B-cell lymphomas (PCBCLs) with large cell morphology are a heterogeneous group of rare neoplasms, constituted histologically by various proportions of large cells with the morphological features of centroblasts and/or immunoblasts, with the interposition of other lymphoid and non-lymphoid cells (centrocytes, follicular dendritic cells, small reactive lymphocytes). The classification of PCBCLs with large cell morphology (which will remain unaltered in the upcoming edition of the WHO classification) includes primary cutaneous diffuse large B-cell lymphoma, leg type (PCDLBCL-LT), primary cutaneous follicle center lymphoma (PCFCL), and primary cutaneous diffuse large B-cell lymphoma, other (PCDLBCL-O). PCBCLs are lymphomas arising in the skin, without clinical and instrumental evidence of systemic disease at the time of diagnosis, constituted by at least 25–30% large cells (cells at least 4 times the size of a small lymphocyte). The diagnosis of PCBCLs cannot be made on histological findings alone, since it requires the demonstration of skin-limited disease at the time of diagnosis by total-body instrumental staging. The differential diagnosis within this group of neoplasms may be challenging due to their rarity and the overlapping morphological and immunohistochemical features of the subtypes. Immunohistochemistry is always needed and may be helpful, but it may not be sufficient on its own to distinguish the different histotypes, whose immunohistochemical features overlap, and the differential diagnosis is often based on the combination of morphological and immunohistochemical details. Nevertheless, the correct diagnosis is mandatory, since prognosis and therapy are significantly different and depend largely on the histological diagnosis. Moreover, improved knowledge of the molecular features of systemic B-cell lymphomas, including gene rearrangements with clinical significance, has led in recent years to further investigation into the molecular landscape of PCBCLs with large cell morphology. This review summarizes current knowledge on the clinical, morphological, immunohistochemical, and molecular findings of PCBCLs with large cell morphology, functioning as a practical guide for diagnosis in the clinical setting.
A literature search was performed in PubMed and Web of Science for studies about the clinical, pathological, and molecular findings of PCBCLs with large cell morphology, using the following search terms: primary cutaneous B-cell lymphoma, primary cutaneous diffuse large B-cell lymphoma leg type, primary cutaneous follicle center lymphoma, primary cutaneous diffuse large B-cell lymphoma other. Only English publications were included.
2.1. Clinical Findings and Behavior
PCDLBCL-LT is an infrequent neoplasm, representing 1–4% of all cutaneous lymphomas and about 20% of all PCBCLs. Elderly women are affected more often than men. The lower extremities are the most common sites of onset of PCDLBCL-LT, but it may arise in other cutaneous sites in about 10–15% of cases. Localizations to the head or neck are rare. Patients often show multiple lesions, represented by reddish plaques or nodules, which may be ulcerated, but a single lesion is also possible. PCDLBCL-LT has an intermediate prognosis, with a 5-year disease-specific survival of about 60% and an overall survival of about 50%. Progression-free survival is about 41.8 months in patients treated with chemotherapy and radiotherapy. Cutaneous dissemination and relapses are common, and systemic dissemination develops in about 17–47% of cases, mainly to lymph nodes. Adverse prognostic features include multiple lesions, ulceration, CDKN2A inactivation, and MYC rearrangement. The prognostic role of MYD88 mutation and bcl2 expression is debated. Front-line therapy for PCDLBCL-LT includes R-CHOP with or without involved-site radiation therapy. Some data suggest that immune checkpoint inhibitors may have a role in the treatment of relapsed/refractory cases.
2.2. Histological Findings
PCDLBCL-LT is defined as a primary cutaneous lymphoma composed exclusively of centroblasts and immunoblasts, most commonly arising in the leg. As suggested by the definition, the most important diagnostic clue of this lymphoma is its cellular composition as assessed by histological examination, with the neoplastic population constituted only by centroblasts and immunoblasts. The lymphoid proliferation is organized in diffuse sheets, diffusely occupying the dermis, with variable involvement of the subcutaneous fat. At low-power observation, the visual impression is that of a monotonous cell population, being constituted by only two cytotypes. Importantly, there are only a few small lymphocytes (reactive T-cells) in the background, often confined to perivascular areas, and follicular dendritic cells are absent. When the lesion is ulcerated, an inflammatory component of granulocytes and plasma cells may be present at the bottom of the ulcer. Immunohistochemically, PCDLBCL-LT expresses all the pan-B markers, such as CD20, CD19, and Pax5. Pan-T markers may be helpful to quantify the reactive T-cells in the background, and CD21 and CD23 may help to confirm the absence of follicular dendritic cells. The prototype immunohistochemical profile of PCDLBCL-LT includes positivity for MUM1, bcl6 (often of slight intensity), bcl2, FOXP1, and IgM, and negativity for CD10, CD30, and CD5. However, CD10 may be positive, often with slight intensity. On the other hand, bcl6, MUM1, bcl2, FOXP1, and IgM may be negative. Although PCDLBCL-LT is considered an activated B-cell lymphoma and usually shows a non-germinal center (GC) phenotype according to the Hans algorithm, a GC phenotype is possible and not infrequently observed. Bcl2 is expressed in most cases (94–100% in the largest series), and its positivity is helpful for the diagnosis of PCDLBCL-LT, since PCFCL is usually (but not always) negative. The proliferation index (Ki67) is high (more than 50%). C-myc is expressed in 67–83% of cases. The main pathological findings for diagnostic purposes are summarized in . Immunohistochemical findings are listed in . A similar percentage of cases (69–83%) co-express bcl2 and c-myc (dual expressors).
Although the prognostic significance of dual expressor status is still not entirely clear, dual expressors displayed significantly worse overall survival and disease-specific survival in the study by Menguy et al.
2.3. Molecular Findings
The most common molecular alterations found in PCDLBCL-LT are related to B-cell receptor pathway activation and dysregulation of the NF-kB signaling pathway, which promotes cell survival, proliferation, and the inhibition of apoptosis in lymphoid cells. Activating mutations of MYD88 (mainly MYD88 L265P) and CD79B (mainly in the ITAM domain) are the most common hot spot mutations in PCDLBCL-LT, and both are useful for diagnostic purposes. Activating mutations of the coiled-coil domain of CARD11 and heterozygous deletions of A20 are also common. PCDLBCL-LT harbors the molecular signature of "activated B-cell-like" lymphomas, showing a terminal B-cell differentiation blockage, and resembles primary large B-cell lymphomas of immune-privileged sites, such as central nervous system and testis lymphomas. Mareschal et al. analyzed the molecular profile of 20 cases of PCDLBCL-LT, showing a very restricted set of highly recurrent mutations, including MYD88 (75% of cases), PIM1 (70% of cases), CD79B (40% of cases), and others (TBL1XR1, MYC, CREBBP, IRF4, HIST1H1E). Moreover, the authors reported some common genetic losses involving CDKN2A/2B, TNFAIP3/A20, PRDM1, TCF3, and CIITA. The inactivation of CDKN2A (by either deletion or promoter hypermethylation) has a prognostic role in PCDLBCL-LT, and cases harboring homozygous deletion have a poorer prognosis than cases harboring heterozygous deletion. PCDLBCL-LT may harbor translocations of MYC, BCL6, and BCL2, but BCL2 translocation seems to be rare. In the largest series, translocations of MYC, BCL6, and BCL2 were demonstrated in 5–44%, 4–29%, and 0–12% of cases, respectively. In the study of Schrader et al., 14 out of 44 (32%) cases showed MYC rearrangement, and 2 cases (4%) showed BCL6 rearrangement. Double- and triple-hit status is possible but infrequent (19% and 6% of cases, respectively). The prognostic role of these cytogenetic alterations in PCDLBCL-LT is unclear. In the study of Schrader et al., MYC translocation correlated with disease-specific survival and disease-free survival, but not with overall survival. Recent data suggest that immune escape may be an important ontogenetic mechanism in PCDLBCL-LT, which harbors recurrent alterations in immune-evasion genes, such as PDL1/PDL2 translocations, leading to the overexpression of PD-L1 or PD-L2 proteins.
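Since the Hans algorithm is referred to repeatedly in this section, the following Python sketch makes the cell-of-origin assignment explicit as a simple decision rule (CD10, then bcl6, then MUM1, each scored positive at the conventional 30% cut-off). It is a simplified illustration only; marker scoring and cut-offs in practice should follow the original publication and local protocols.

def hans_cell_of_origin(cd10_pct, bcl6_pct, mum1_pct, cutoff=30.0):
    # Each argument is the percentage of tumour cells staining positive for the marker
    cd10 = cd10_pct >= cutoff
    bcl6 = bcl6_pct >= cutoff
    mum1 = mum1_pct >= cutoff
    if cd10:
        return "GCB"
    if not bcl6:
        return "non-GCB"
    # CD10-negative, bcl6-positive cases: MUM1 decides
    return "non-GCB" if mum1 else "GCB"

# A prototype PCDLBCL-LT profile (MUM1-positive, CD10 weak/absent) classifies as non-GCB
print(hans_cell_of_origin(cd10_pct=5, bcl6_pct=60, mum1_pct=80))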
3.1. Clinical Findings and Behavior
PCFCL mainly affects middle-aged adults. Lesions are typically located on the head and neck or the upper trunk and consist of solitary or grouped papules, plaques, and/or tumors. In approximately 5% of cases, the legs are involved. Furthermore, 15% of patients present with multifocal skin lesions. Ulceration may occur. PCFCL is an indolent disease with a good prognosis. Five-year disease-specific survival is over 95%, and systemic spread is rare. Cutaneous relapses may occur (in about 30% of cases) and tend to occur at the site of initial presentation. Cases located on the leg may have a more aggressive behavior.
3.2. Histological Findings
PCFCL is defined as a primary cutaneous lymphoma composed of centrocytes and a variable number of centroblasts, with a follicular, follicular and diffuse, or diffuse pattern of growth. The cellular composition (centrocytes and centroblasts), which defines the neoplasm, is the most important clue for the diagnosis. Since centroblasts may also be present in PCDLBCL-LT, centrocytes are mandatory for the diagnosis of PCFCL and are the most important clue in the differential diagnosis with other PCBCLs with large cell morphology; lymphomas consisting exclusively of centroblasts are excluded from this category. Overall, PCFCL is characterized by dermal/subcutaneous infiltration by admixed centrocytes and centroblasts, often with an evident grenz zone. Variable numbers of reactive T-cells are present in the background. Although intermediate types do exist, PCFCL includes two morphological types: a follicular type and a diffuse type. The former is organized in follicles, sharing most of its diagnostic clues with classic systemic follicular lymphoma; it must be differentiated from florid follicular hyperplasia (pseudo-lymphoma), marginal zone lymphoma, and systemic follicular lymphoma. The latter shows a diffuse growth pattern and must be differentiated mainly from other lymphomas with large cell morphology, first of all PCDLBCL-LT and PCDLBCL-O. In the follicular type, the follicles are the main source of diagnostic features. Indeed, the follicles are homomorphous in both diameter and cellular composition (often with a prevalence of centroblasts), follicular histiocytes are absent, mitoses are few and the proliferation index is relatively low, no signs of follicular polarization are apparent, the mantle zone is attenuated or absent (more evident by IgM IHC), and follicle center cells are present outside the follicular dendritic cell meshwork (more evident by CD21/CD23 and bcl6 IHC). In the diffuse type, the neoplastic population consists of a monotonous population of large centrocytes, some of which may have a multilobated appearance, and centroblasts. There is no evidence of follicles, and follicular dendritic cell meshworks are absent. Immunohistochemically, PCFCL expresses all the pan-B markers, such as CD20, CD19, and Pax5. The prototype immunohistochemical profile of PCFCL includes positivity for bcl6, partial or absent expression of CD10, and negativity for MUM1, CD5, bcl2, and FOXP1; CD3 highlights a variable number of reactive T-cells; CD21 and CD23 highlight a disrupted meshwork of follicular dendritic cells in the follicular type. The proliferation index (Ki67) is more often <50%. However, bcl6 and CD10 may both be negative, while MUM1, FOXP1, and bcl2 may be positive, and the proliferation index (Ki67) may be high (>50%).
PCFCL of the diffuse type is characterized by CD10 negativity and high Ki67, while follicular dendritic cell meshworks are absent, bcl2, MUM1, and FOXP1 are usually negative, and CD5 and CD43 are always negative. C-myc expression is not rare, as it is reported in up to 48% of cases. Although the main diagnostic clues of follicular lymphoma (bcl2 expression and BCL2 rearrangement) are often lacking in PCFCL, they may be seen. Bcl2 expression is seen in a significant proportion of cases (up to 38%) and is correlated with a higher risk of cutaneous relapses. Up to 30% of cases co-express c-myc and bcl2. A low proliferation index (Ki67 < 30%) has been correlated with systemic spread.
3.3. Molecular Findings
The molecular landscape of PCFCL is not clearly defined, and there is currently no molecular marker able to distinguish this lymphoma from systemic follicular lymphoma or to predict future systemic involvement in PCFCL. However, BCL2 rearrangement is certainly common in systemic follicular lymphoma and rare in PCFCL. BCL2, BCL6, and MYC rearrangements are rare in PCFCL. Indeed, in the series of Menguy et al., BCL6 and MYC rearrangements were found in 1 out of 21 (5%) cases. However, BCL2 rearrangement has been variably reported in different series and may be found in up to 30% of cases. Although the prognostic role of these molecular alterations is not entirely known, they do not seem to affect overall survival or disease-specific survival. Zhou et al. recently investigated the molecular findings of skin-restricted PCFCL and of cutaneous involvement by systemic follicular lymphoma (SFL) with concurrent or future systemic involvement. BCL2 rearrangement was found in 17% and 100% of PCFCL and SFL cases, respectively. By whole-exome sequencing, the authors demonstrated mutations in genes associated with chromatin remodeling in SFL, including CREBBP, KMT2D, and EZH2. In contrast, PCFCL was characterized by a more heterogeneous molecular landscape, including mutations of TNFRSF14, MYC, JAK3, KRAS, FOXO1, CARD11, RHOA, TET2, SOCS1, and B2M.
PCDLBCL-O is not a specific pathological entity but a heterogeneous group of B-cell neoplasms characterized by large cell morphology, which do not meet the diagnostic criteria for PCDLBCL-LT or PCFCL. This "umbrella" category includes rare specific subtypes, such as intravascular large B-cell lymphoma and cutaneous plasmablastic lymphoma, as well as diffuse large B-cell lymphomas occurring primarily in the skin (primary cutaneous diffuse large B-cell lymphoma not otherwise specified, PCDLBCL, NOS).
4.1. Clinical Findings and Behavior
PCDLBCL, NOS is a poorly defined entity, and it is not entirely known whether it is an independent entity with respect to systemic DLBCL. PCDLBCL, NOS affects adults and elderly patients, with a median age of 70 years. Although PCDLBCL, NOS is located most often on the trunk or head/neck, the anatomic distribution of the neoplasm is wide, and it arises on the leg in a variable percentage of cases. In the series of Kodama et al., five out of nine (55.6%) cases were located on the leg. PCDLBCL, NOS has an intermediate prognosis, with a 5-year overall survival and disease-specific survival of about 50%.
4.2. Histological Findings
PCDLBCL, NOS is a diagnosis of exclusion and includes cases with large cell morphology not fulfilling the diagnostic criteria for PCDLBCL-LT or PCFCL. Histologically, PCDLBCL, NOS consists of a diffuse lymphoid population composed mainly of large cells with variable morphology, including centroblasts, immunoblasts, and medium-sized centrocytoid cells. The lymphoid population is located in the dermis, but it extends to the hypodermis in about 60% of cases. The arrangement of the neoplasm may be nodular or diffuse, but it is more often mixed nodular and diffuse. Although the reactive T-cell population in the background is variable and may be scant in some cases, it is usually moderate or intense. A meshwork of follicular dendritic cells may be present. As in systemic DLBCL, the immunophenotype of PCDLBCL, NOS is variable, and a prototype immunophenotype is not defined. CD10, bcl6, and MUM1 are expressed in about 30%, 80%, and 25% of cases, respectively. A "non-GC" phenotype according to the Hans algorithm is more frequently observed. Moreover, bcl2 and c-myc are expressed in about 65% and 35% of cases, respectively; however, the co-expression of these two markers is significantly more common in PCDLBCL-LT than in PCDLBCL, NOS. The proliferation index (Ki67) is variable, with a mean value of about 40%. BCL2 and MYC translocations may be present in PCDLBCL, NOS, but double-hit status is significantly more common in PCDLBCL-LT.
4.3. Molecular Findings
The molecular findings of PCDLBCL, NOS are not entirely known, and it is debatable whether the molecular landscape of this neoplasm differs significantly from that of systemic DLBCL. Weissinger et al. recently investigated the molecular alterations in a series of primary extranodal lymphomas, including 16 cases of PCDLBCL, NOS. The most commonly mutated genes in PCDLBCL, NOS were MYD88 (50%), CD79B (37%), CARD11 (6%), and BTK (6%). MYD88 and CD79B mutations are involved in B-cell receptor (BCR) activation and are significantly associated with a non-GC phenotype according to the Hans algorithm. In addition, MYD88 mutation seems to be significantly associated with a worse prognosis. Alterations of the PDL1/2 locus (9p24.1) are present in a variable number of cases, mainly as relative gains. Relative losses and polysomy of 9p24.1 have also been reported.
The correct diagnosis of PCBCLs with large cell morphology is mandatory for the proper management of patients, as this group of neoplasms includes both indolent and aggressive subtypes requiring different therapies. Distinguishing PCBCLs from secondary skin localization of a systemic lymphoma always requires clinical and instrumental evidence. The immunohistochemical and molecular features of PCBCLs are not entirely specific for each subtype, and a comprehensive evaluation of all clinical, histological, immunohistochemical, and molecular findings is needed. In some cases, morphological features are still the fundamental basis of the diagnosis, and a "prototype" immunophenotype is a helpful finding. More data are needed to establish whether PCDLBCL, NOS should be classified as an independent entity. Although T-cell lymphomas are the most common primary cutaneous lymphomas, dermatologists and pathologists should be familiar with PCBCLs with large cell morphology.
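To tie the practical points of this review together, the Python sketch below encodes the first-pass morphological triage described above (demonstration of skin-limited disease, then presence of centrocytes versus a pure centroblast/immunoblast population) as a simple decision helper. It is deliberately simplified; assignment in practice also requires immunohistochemistry, total-body staging, and, where needed, molecular studies.

def triage_pcbcl_large_cell(skin_limited, has_centrocytes, only_centroblasts_immunoblasts):
    # skin_limited: total-body staging shows disease confined to the skin
    # has_centrocytes: centrocytes are part of the neoplastic population
    # only_centroblasts_immunoblasts: the infiltrate consists exclusively of centroblasts and immunoblasts
    if not skin_limited:
        return "secondary cutaneous involvement by a systemic lymphoma: stage and classify accordingly"
    if has_centrocytes:
        return "consider PCFCL (follicular or diffuse type)"
    if only_centroblasts_immunoblasts:
        return "consider PCDLBCL-LT"
    return "consider PCDLBCL, other / NOS"

print(triage_pcbcl_large_cell(skin_limited=True, has_centrocytes=False,
                              only_centroblasts_immunoblasts=True))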
Defensins of
Damage to the skin and subcutaneous tissue is a common problem associated with injuries as well as with chronic systemic processes that disturb the blood supply and oxygenation of tissues. This problem is increasingly common and affects nearly 2% of the population of developed countries, which makes it a serious socio-economic issue. The healing of uncomplicated wounds is governed by complex and dynamic processes involving successive phases of hemostasis, inflammation, proliferation, and tissue remodelling. The repair phases usually proceed in a predictable manner, resulting in tissue healing without the need for significant intervention. Wounds that do not respond to treatment in accordance with accepted standards and remain in a prolonged inflammatory-proliferative phase are defined as difficult to heal. Healing processes lasting more than 6 weeks justify labelling the wound as chronic (the exception is a wound in the course of diagnosed diabetic foot, which is considered chronic after 14 days or more). The wound hygiene concept developed by European experts was based on the assumption that biofilm-forming microbes are the main cause of delayed healing in 60–90% of cases, and biofilm is noticeable just a few days after the potential injury. Biofilm is defined as a highly structured, three-dimensional cluster of microorganisms (bacteria or fungi) embedded in a self-produced extracellular polymeric substance (EPS). The formation of a biofilm is a multi-stage process that depends on the structure and physicochemical properties of the colonized surface. The detachment of bacterial cells from the formed structure and their circulation with blood or other body fluids is both the last stage of biofilm development and the beginning of the colonization of new surfaces. The biofilm matrix surrounding the bacteria makes them tolerant to harsh conditions and resistant to antimicrobial treatment. The emergence of antibiotic resistance within the biofilm reduces the effectiveness of treatment. The cells that make up a biofilm have different properties from those existing in free form. Infections caused by the planktonic phenotype are more aggressive and violent; nevertheless, the metabolic activity of these planktonic cells is higher and their risk of antibiotic resistance is lower. Biofilm can form on both biotic and abiotic surfaces, and it may also exist without adhering to any surface. The available antibiotics may be ineffective in treating these infections due to the higher minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) values required, which can cause in vivo toxicity. The basic local action resulting from the wound hygiene concept is systematic debridement, i.e., the elimination of devitalized tissue from the wound surface. This process can be carried out by means of various methods and techniques, as well as substances with an antiseptic effect recommended by scientific societies. Over the last decade, Lucilia sericata medical larvae have been claimed to be "miracle therapeutic maggots" due to their manifold biochemical properties that stimulate healing processes in a wound. Isolating chemical substances from maggot excretions and secretions gives greater possibilities to develop research on the use of defensins to stimulate healing processes in wounds of different aetiologies. Several randomized trials and meta-analyses confirm the high effectiveness of secretions and excretions produced by larvae in the process of wound debridement and healing.
Excretions and secretions (ES) of the larvae contribute to the elimination of bacteria and stimulate repair processes. The anti-biofilm and antibacterial effects of the protein substances excreted by the larvae are visible during local therapy, and tissue renewal and reconstruction are clearly seen. The authors indicate that ES strongly inactivate Pseudomonas aeruginosa, methicillin-resistant Staphylococcus aureus (MRSA), and group A and B streptococci. The aim of the study was to present the use of Lucilia sericata medical maggots in chronic wounds located on the lower legs, illustrated by three selected clinical cases.
Out of a group of 30 patients with lower leg ulcers in the course of chronic venous insufficiency (CVI) treated in the wound care clinic in 2021, 3 female patients, aged 64, 68, and 87 years (mean age 73 years), were randomly selected by means of an Excel pseudo-random number generator. The selected subjects demonstrated features of regression in the healing process: no epithelialization and yellow fibrinous devitalized tissue in the wound; they were not qualified for surgical treatment with a tissue graft; their nutritional status was normal; blood biochemical parameters (hemoglobin, albumin, creatinine, glucose) and markers of systemic infection (CRP, PLT) were within the normal range; the ankle brachial index (ABI) was over 0.8; and compression therapy was implemented. In the microbiological evaluation, Staphylococcus aureus (+++) (Cases I and II) and Pseudomonas aeruginosa (+++) (Case III) were found. The time since the occurrence of the wounds ranged from 2 to 10 years. The treatment in the clinic did not exceed 12 months (2–12 months). Microbiological samples were collected from the wounds 7–10 days before the planned implementation of MDT, and re-inspection was performed after 30 days. In every case, the recommended debridement methods were used (scraping, active dressings, antiseptic gels). The patients were informed orally and in writing about the scope of potential complications and therapeutic measures, to which they consented in accordance with the Declaration of Helsinki (Bioethics Committee 2017). The level of acceptance of biodebridement was assessed using the MDT acceptance questionnaire; each of the respondents presented low values in the questionnaire assessment, and therefore a closed form of larvae in a biobag was implemented. They also had the opportunity to directly contact the person responsible for monitoring the therapeutic process by phone. Maggot debridement therapy (MDT) in a 10 × 10 biobag (approx. 120 larvae) (Biofenicja, Biomantis, Krakow, Poland) was implemented. The application and supervision were in line with the 2020 guidelines of the PTLR (Polish Wound Treatment Society). During the three-day therapy, the dressing was moistened with sterile 0.9% NaCl solution once a day; no compression therapy was used during this period. A visual assessment of the wound condition was performed before and after debridement, assessing the ratio of viable (red) granulation tissue to (yellow) devitalized/necrotic tissue. Based on the percentage of wound contamination with necrosis, the "debridement index" was calculated for all patients from the values recorded before and after debridement by means of the equation: debridement index = 100 − (x2/x1) × 100, where x1 is the percentage of necrosis and purulent exudate before treatment and x2 is the percentage of necrosis and purulent exudate after treatment. The obtained results were classified into separate percentage ranges, where 0% means no debridement of necrotic tissue (lack of therapeutic effect), 10–30% poor wound debridement (unsatisfactory therapeutic effect), 40–80% average debridement (good therapeutic effect), and 90–100% complete debridement of the wound (very good therapeutic effect).
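For completeness, the debridement index formula and the classification scale above can be written as a small helper. The Python sketch below is based only on the definitions given here (x1 and x2 are the percentages of necrosis and purulent exudate before and after treatment); the mapping of intermediate values onto the published ranges uses simple thresholds and is an assumption made for illustration.

def debridement_index(x1_before_pct, x2_after_pct):
    # 100 when all necrosis/purulent exudate has been removed, 0 when it is unchanged
    if x1_before_pct <= 0:
        raise ValueError("x1 (pre-treatment contamination) must be positive")
    return 100.0 - (x2_after_pct / x1_before_pct) * 100.0

def therapeutic_effect(index_pct):
    # Percentage ranges as used in this study
    if index_pct <= 0:
        return "no debridement of necrotic tissue (lack of therapeutic effect)"
    if index_pct < 40:
        return "poor wound debridement (unsatisfactory therapeutic effect)"
    if index_pct < 90:
        return "average debridement (good therapeutic effect)"
    return "complete debridement of the wound (very good therapeutic effect)"

idx = debridement_index(x1_before_pct=80, x2_after_pct=40)  # e.g. necrosis halved
print(f"{idx:.0f}% -> {therapeutic_effect(idx)}")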
Description of the Cases
Case I: A 64-year-old woman capable of self-care (score 80 in the Barthel Index), with a history of CVI and a left lower leg wound present for over 10 years. For 10 months she was treated at the wound treatment clinic, and the wound was reduced by 50%. For several weeks, inhibition of granulation processes, reduction of exudate, and hard yellow devitalized tissue were present in the wound. At the time of qualification for the study, the lesion area was over 50 cm2, with full-skin-thickness damage, classified as a yellow wound (according to RYB), with scarce exudate (in the microbiological assessment, Staphylococcus aureus (++), an MSSA (methicillin-sensitive Staphylococcus aureus) strain, was found), and without reported pain. Additionally, II° compression therapy and auxiliary mesotherapy (once a week with a collagen-based preparation) were implemented. A biobag (100 larvae) was used for a period of 3 days, and the dressing was inspected each day. During the therapy, the patient did not report any pain above 3 points (NRS); wound debridement reached 50% (good therapeutic effect). Then, foam dressings plus an antiseptic gel were implemented, along with the continuation of compression therapy, with dressing changes every 3 days and scraping of the wound once a week during follow-up at the clinic. After 14 days of MDT, hydroactive dressings were introduced due to the symptoms of a so-called "dry wound" ( ), and the patient was offered to consider NPWT (negative pressure wound therapy). Poor healing and reduction of the wound area were observed within 21 days of MDT.
Case II: An 87-year-old woman capable of self-care (score 90 in the Barthel Index), with a history of CVI, a cardiostimulator, and novel oral anticoagulants (NOAC), with an ulcerated wound of the right lower leg present for about 4 years. The patient had been treated at the wound treatment clinic for about 3 months. On examination, a full-thickness skin wound of over 50 cm2 was found, which was red–yellow (according to RYB), with a Wound at Risk (WAR) score of 3, no features of healing, and scarce exudate (in the microbiological assessment, Staphylococcus aureus (++) was found). Additionally, II° compression therapy was initiated, and the wound was mechanically debrided; then silver foam dressings plus antiseptic gels were introduced. A biobag (3 × 100 larvae) was used on the prepared medium for a period of 3 days and monitored every 24 h. The patient did not report any significant pain on day "0" of the therapy (2–3 points, NRS/VAS), but an increase in pain to 7–8 points (NRS/VAS) was observed in the following days. On the third day of therapy, bleeding from the wound was noted; the wound was revised, the biobags were evacuated, and a hemostatic dressing was applied. Wound debridement was about 70% (good therapeutic effect). Then, alginate plus foam dressings and an antiseptic gel were implemented, along with the continuation of II° compression therapy, with dressing changes every 3 days; wound scraping was performed once a week during follow-up. Slow progression of healing and growth of granulation within the wound were noted during the 21 days after MDT. Next, the patient was prepared for NPWT therapy.
Case III: A 68-year-old woman capable of self-care (score 80 in the Barthel Index), with a history of CVI and a left lower leg wound present for over 2 years (not previously referred for compression therapy). The patient had been treated at the wound treatment clinic for about 2 months.
On examination, the limbs were swollen, with a full-thickness skin wound of over 50 cm2, red–yellow (according to RYB), a WAR score above 3 points, no features of healing, and medium exudate; Pseudomonas aeruginosa (+++) was found in the microbiological assessment. Additionally, II° compression therapy was initiated, mechanical debridement was performed, and then antiseptic dressings based on povidone iodine (PVP-I) plus foam dressings were implemented. A biobag (2 × 100 larvae) was used for 3 days, and the dressing was inspected each day. Then, foam dressings plus an antiseptic gel were implemented, along with the continuation of II° compression therapy, with dressing changes every 3 days and scraping of the wound once a week during follow-up at the clinic ( ). A visible healing process and reduction of the wound area were observed within 21 days after MDT application. Detailed data on the wounds before MDT and at follow-up 21 days after MDT implementation are presented in and .
One of the major issues concerning the elimination of biofilm is increasing antibiotic resistance. A meta-analysis by Malone et al. confirms the presence of biofilm in 78.2% of chronic wounds . Too frequent and unjustified antibiotic therapy contributes to the development of persister cells, a subpopulation of surviving cells, thus enabling the reconstruction of the biofilm population . Less than 4 years (1944) after the introduction of penicillin to the pharmaceutical market, β-lactamase-producing strains were noted in over 50% of Staphylococcus aureus samples, which clearly demonstrated the ability of bacteria to develop resistance to available antibiotics. Growing antibiotic resistance also has negative consequences in terms of time and economy: therapy becomes longer, which increases the overall costs of patient care and the financial burden on society . Multi-drug-resistant organisms (MDRO) have become a serious threat to civilization, stimulating the search for more effective methods of destroying microorganisms. Insightful observations and research by Sherman and Pechter on the elimination of bacterial flora, including MRSA (methicillin-resistant Staphylococcus aureus ), by larvae placed in the wound opened up new opportunities for researchers and clinicians all over the world . According to the guidelines, the elimination of devitalized necrotic tissue is the basic procedure in the treatment and management of a wound [ , , , ]. Debridement of a wound in which repair processes are inhibited is not subject to unambiguous, “rigid” guidelines. The choice of method for eliminating devitalized tissue is multifactorial and depends on the area, location, and depth of the damaged structures, the amount of exudate, concomitant pain, as well as the general condition of the patient and their preferences . Mechanical debridement of the wound (rubbing, scraping, plucking, cutting out) is the simplest, cheapest, and fastest method of biofilm elimination performed by trained medical personnel [ , , , , ]. However, most chronic wounds with coexisting biofilm require more advanced measures: autolytic or biological debridement should be considered, followed by the implementation of controlled negative pressure wound therapy as part of local wound treatment . Taking into account expert recommendations and clinical observations, three wounds with an area of approx. 50 cm 2 colonized with microorganisms were subjected to biological debridement of devitalized tissue, and the subsequent repair processes were monitored. The subjects were randomly selected, based on specific criteria, from a sample of 30 people in whom MDT was used in the course of topical wound treatment. Larvae enclosed in a biobag were used because this causes less pain and reduces patients’ aversion related to the sight of the larvae. It was taken into account that this form of debridement may be less effective, but it reduces the subjects’ psychological fears. The expected quick debridement of the wound was noted; however, a few days after debridement, two of the three wounds (cases I and II) were still not healing properly and were becoming covered with fibrin. In line with the recommendations, scraping was performed, an antiseptic was applied before changing the dressing, compression was applied, and foam dressings soaked with antiseptic were used.
The above observations, recorded in the presented patients as well as in the remaining group of respondents, indicate that the implementation of MDT should be standard and repeated in 7–10-day cycles in order to minimize bacteria and biofilm and to stimulate repair processes in the wound. These clinical observations are consistent with the results of research conducted by Akbas et al., who showed that larval secretions containing fumaric acid, ferulic acid, and p-coumaric acid enhanced the migration of fibroblasts and can modulate the mRNA expression of some genes related to the wound healing process . In a meta-analysis by Sun et al., MDT not only shortened the healing time but also improved the healing rate of chronic ulcers . The biological debridement of the wound is related to mechanical effects and to biochemical effects associated with the defensin proteins produced by maggots. Mechanical debridement is associated with the removal of necrotic tissue by maggots wriggling in the wound area (which may give rise to a sensation of tingling and even pain). Removing necrosis from the wound increases oxygen availability to healthy tissues, facilitates the migration of fibroblasts and keratinocytes, and physically eliminates pathological microorganisms, which reduces the likelihood of their further multiplication [ , , , , ]. Maggot wriggling in the wound bed stimulates neoangiogenesis and granulation. Restoring a functional vascular network is fundamental in diabetic foot syndrome. The study by Sun et al. indicates a number of neoangiogenic factors, including those activating endothelial cells . Analysis of wounds before and after the application of medical maggots demonstrates the promotion of wound healing on many levels; hence, an interesting element of MDT is its broad chemical action based on the secretion and excretion of specific enzymes and other substances with antibacterial activity (Lucilin, Lucifensin, Lucifensin II, MAMP (alpha-methoxyphenol), seraticin) [ , , , , , ], antibiofilm activity (chymotrypsin) [ , , ], and anti-inflammatory activity (excretions/secretions, ES) , as well as synergism with selected antibiotics and immunomodulatory functions . Human skin also has the ability to produce defensins. Antimicrobial peptides (AMPs) are one of the primary mechanisms used by the skin in the early stages of immune defence. AMPs have broad antibacterial as well as antifungal and antiviral effects. In their study, Fijałkowska et al. showed a relationship between the presence of basal cell carcinoma (BCC) cells and the plasma concentrations of cathelicidin and β-defensins (HBD1-3). Elevated levels of cathelicidin and β-defensin 2 are associated with the presence of BCC. The specificity of cathelicidin and β-defensin 2 in the detection of BCC was confirmed, which in the future may serve as a determinant in the assessment of cancer risk. The authors indicate that these factors are not specific only to this disease and further studies are required . The antimicrobial activity of maggots was also observed in the case of bacteria characterized by high resistance to antibiotics, such as Pseudomonas aeruginosa and Staphylococcus aureus [ , , , ]. The elimination of biofilm in these cases is particularly important because biofilm is highly resistant to penetration by, and to the action of, both the human immune system and antibiotics . An important discovery of recent years is that lucifensin, which has antibacterial properties, is not found in the digestive tract of Lucilia sericata but in the salivary glands and the fat body.
It has also been proven that the larvae generate an immune response to the infectious environment in which they reside by increasing the expression of lucifensin in the fat body, from which it is secreted into the hemolymph. Allantoin excreted by maggots promotes local and temporary proliferation of leukocytes. Its presence, together with other substances such as ammonium bicarbonate and urea, is believed to keep the wound pH in the alkaline range necessary for the activity of Lucilia sericata proteases during debridement. As the wound heals, the pH shifts from alkaline to neutral and finally becomes acidic, as in healthy skin. Szczepanowski et al. drew attention to the fact that debridement of wounds with larvae also changes the bacterial flora in the wound; the authors indicate that Proteus mirabilis is a microorganism present in the maggot’s digestive tract and may contaminate the wound . In our material, a microbiological assessment of the wounds was performed after a period of not less than 30 days, confirming the eradication of S. aureus but with scanty growth of Proteus mirabilis . It was not possible to eradicate Pseudomonas aeruginosa in the observed time; nevertheless, this wound demonstrated healing processes and finally healed, whereas the remaining wounds did not heal completely but their area and depth decreased by more than 50% over the 12-month follow-up. In the analyzed cases, wound scrapings were collected for microbiological testing before the implementation of MDT, and no scrapings were collected until 7 days after debridement, which we consider to be a limitation of this study. Periodic sterilization of the wound and contamination with Proteus mirabilis are observations we make frequently, and they have also been reported by other authors [ , , ]. Microbiological material was collected only 20 days after MDT application and still indicated bacterial colonization. Nezakati et al. reported that larval therapy has a varied effect on bacterial species, eliminating P. aeruginosa , E. coli, and S. aureus and having the least impact on the growth of Enterococcus , thus stressing that research should be extended in this direction . Complications related to the use of biological debridement are rare and mainly involve intense mental and somatic sensations (irritation of nerve endings) generating pain and anxiety. In the discussed cases, the subjects tolerated the therapy well on the first day, but on the second and third days they reported an intensification of pain above 4 points, and therefore non-opioid analgesics were prescribed according to a protocol. Bleeding was observed in one of the subjects taking NOAC; it was related to the opening of a small vessel with capillary bleeding, which worried the patient. Patients using anticoagulants should not be disqualified from this form of wound debridement; however, particular attention should be paid to the observations reported by the patient during the therapy. To sum up, in only one of the examined cases were an acceleration of the repair processes and a reduction of the wound area by about 40% confirmed. In the remaining cases, the appearance of granulation tissue was noted, but with no visible signs of epithelialization or reduction of the wound area. Nevertheless, the above observations clearly indicate a positive effect of the action of medical maggots in regenerative processes.
Carrying out the overall analysis, the conclusion is that this form of debridement should be used more often (every 7 days) when the wound begins to be covered with devitalized fibrin tissue, which suggests the formation of a bacterial biofilm. Analysis of the patients’ cases leads to the clear conclusion that patients whose wounds have been prepared (cases I and II; granulation tissue with minimal bacterial growth) should be re-consulted regarding the possibility of coverage with autogenous tissue, together with broad education indicating the advantages of this method and its potential disadvantages. Repair processes in the skin tissue of elderly people with coexisting chronic diseases may be disturbed. Molecular mechanisms associated with chronic venous disease and venous hypertension lead to severe lipodermatosclerotic, structural, and functional changes in the lower leg, leading to inflammation, interruption of keratinocyte migration, and abnormal regulation, signalling, and/or expression of specific microRNAs. In order for local treatment to be effective, the functionality of tissues and the circulation in the limbs should be improved (elimination or minimization of microorganisms, reduction of edema, improvement of nutrition) . Skin grafting is one of the most common surgical procedures performed to shorten the healing of a chronic wound, consisting of covering the debrided wound bed with autologous tissue taken from another area of the body. This form of treatment can be implemented in the group of patients with efficient peripheral circulation who are not treated with glucocorticoids or cytotoxic drugs. All patients undergoing skin grafting should be educated on the risk of graft failure. Chronic leg ulcers are a significant problem entailing major costs in Western countries and requiring various management strategies . Skin grafting is a treatment method that can reduce the area of chronic leg ulcers or heal them completely in a short period of time, thus improving the patient’s quality of life. Currently, skin grafts play a key role in the context of modern wound healing and tissue regeneration. Although autologous split-thickness skin grafts (STSG) still remain the gold standard in terms of safety and efficacy in the treatment of chronic leg ulcers, in practice the possibilities may be limited by the patient, i.e., their fear of failure, the formation of a larger wound area, reluctance to be hospitalized, limited access to professional care, or insufficient understanding of the method . In our study, the wounds were subjected to debridement; however, two patients did not consent to this form of treatment, and the oldest patient was not eligible due to her age and arterial changes. As a result, the process of local treatment was significantly extended, and follow-up at 12 months showed the practical healing of one wound and the reduction of the area of the two other wounds by more than 50% ( ). According to specialists, MDT is a beneficial and promising option in the treatment of wounds. The presented cases were randomly selected and we did not achieve complete success with the use of MDT; however, our observations are consistent with Sherman’s reports . To achieve better results, the application should be repeated more often, which would allow the biofilm to be reduced and repair processes in the wound to be improved at relatively lower cost and with greater patient acceptance of the method.
In parallel, as new biological mechanisms are discovered, statistical analyses of the speed of debridement and wound healing are being carried out. The use of larvae can also reduce the overall cost of treating difficult-to-heal wounds by reducing or eliminating hospital stays and antibiotic consumption. In our opinion, medical maggots can be used for the debridement and revitalization of tissues before further surgical management related to the surgical treatment of ulcers. During the COVID-19 pandemic, this form of wound debridement became not only an alternative but also a necessity for patients who, for various reasons, cannot have their wounds surgically debrided in hospital conditions [ , , ].
The wound healing process is complex and multifactorial. All recommended local treatments that reduce the bacterial count and create conditions for healing by granulation or by covering the wound with tissue material should be considered. The formation of a bacterial biofilm in a chronic wound is one of the main causes of disturbances in its effective healing. Combining procedures related to wound debridement (scraping with subsequent antiseptic application, MDT, NPWT) increases the effectiveness of bacterial biofilm elimination. The use of medical maggots is a safe and effective method of choice that enhances debridement. However, there is still a lack of confirmed, indisputable data on its effectiveness and the optimal frequency of use in stimulating healing processes.
The presented cases were randomly selected, and the healing process was presented during a 21-day follow up. Given that these were infected wounds, it is too short a time to draw constructive conclusions. Although there were no spectacular effects, the impact of the MDT method was assessed positively in each case. The patients presented in the study were not qualified for the surgical management of tissues with split-thickness skin graft (STSG) due to the lack of consent to this form of treatment.
|
Enterobacteria Survival, Percolation, and Leaching on Soil Fertilized with Swine Manure
|
dfccf147-687b-4e60-95f0-f20605181360
|
10094324
|
Microbiology[mh]
|
Swine manure is a mixture of urine, feces, food residues, and water used in cleaning activities and contains a high load of microorganisms . The microbial composition of manure can vary depending on factors such as the age and type of animal, the feed, manure dilution, and the storage technique , with a large population of saprophytic microorganisms, pathogenic bacteria, viruses, and fungi, as well as gastrointestinal parasite eggs and oocysts . Of manure pathogens, special attention has been given to the enterobacteria group, as they have been pointed out as responsible for more than 2.2 million annual deaths caused by gastrointestinal problems . Among enterobacteria, E. coli has been used as a fecal indicator for decades, but it can also be a pathogen due to its different strains, such as Enteroinvasive E. coli (EIEC), Enterotoxigenic E. coli (ETEC), Enteropathogenic E. coli (EPEC), Enterohemorrhagic E. coli (EHEC), Uropathogenic E. coli (UPEC), and Enteroaggregative E. coli (EaggEC) . All around the world, diarrhea caused by pathogenic E. coli is responsible for 550 million illnesses and 230,000 deaths each year . Furthermore, prolonged oral exposure to these fecal contaminants has been linked to environmental enteropathy, a subclinical condition defined by chronic bowel inflammation that can contribute to structural changes in the small intestine and immune dysfunction in the patient . Although the majority of E. coli types are innocuous, some variants are harmful to health, and their presence also signals a risk of other waterborne pathogens, such as Salmonella spp. Salmonella spp. are rod-shaped, Gram-negative bacteria, with over 2500 serovars, that colonize the intestinal tract of animals and humans . This bacterium has been reported by the World Health Organization (WHO) to be one of the antibiotic-resistant priority pathogens, requiring urgent strategies for infection management, including the reduction of this bacterium in environmental matrices . Spreading manure on soil as a fertilizer is of special concern since it has been associated with environmental and public health issues due to the presence of zoonotic microorganisms, which can contaminate water and may become associated with vegetable roots and be internalized . For a long time, scientists considered that soil could act as a filter with the potential for self-purification, naturally reducing the pathogen load. However, studies have reported the migration of pathogens in soil, both vertically and horizontally, over distances as far as 830 m . This migration ability increases the possibility of water contamination . Because of complex interactions among microorganisms and soil constituents such as organic matter, and soil properties such as porosity, microbial transport can differ across soils . Consequently, certain soil types are more susceptible to microbial migration . In a study by Mantha et al. , Salmonella enterica leached more successfully through sandy soils than through organic soils. Furthermore, higher bacterial survival in organic soils and a rapid decrease in Escherichia coli ( E. coli ) concentrations under more nutrient-poor soil conditions have been reported [ , , , , ]. When compared to sandy soils, which present non-cohesive particles and low organic matter retention, clayey soils offer greater water and nutrient retention capacities, favoring bacterial survival . Certain studies have shown this effect. For example, a study comparing E.
coli O157:H7 survival in clayey and sandy soils after the application of cattle slurry found that survival in clayey soils could last up to 16 weeks, compared to 8 weeks in sandy soil . To facilitate the assimilation of manure or other liquid wastes into the soil matrix, agricultural soil is frequently tilled. Due to this practice, the size distribution of macropores changes, and the bulk density of the soil is temporarily reduced . As a result, the soil has a considerable impact on the dynamics of pathogen transfer to groundwater sources. This necessitates a thorough understanding of pathogen movement and survival as pathogens traverse the soil profile . Currently, Brazil is the fourth-largest swine producer in the world, and the generated manure has been applied to soil as a fertilizer for many decades because it contains nutrients beneficial to plants and improves the soil structure . The volume of manure to be managed can be estimated from the daily volume excreted per animal (8.6 L) applied to a herd of 38,212,374 animals . However, in these swine-producing regions, studies evaluating the survival of enterobacteria in soil are scarce. Therefore, we aimed to evaluate the survival, percolation, and leaching of enterobacteria in clayey soil after fertilization with swine manure. We spiked swine manure with Salmonella enterica Senftenberg ( S. senftenberg ) and E. coli and applied it to clayey soil. We then evaluated the survival, percolation, and leaching of the added enterobacteria.
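For a sense of scale, the herd size and per-animal excretion figures cited above imply the following back-of-the-envelope estimate of daily manure output. This is an illustrative calculation only, not a value reported in this study:

```latex
% Daily manure volume implied by 8.6 L per animal per day and a herd of 38,212,374 animals
\[
V_{\mathrm{daily}} \approx 8.6~\mathrm{L\,animal^{-1}\,d^{-1}} \times 38{,}212{,}374~\mathrm{animals}
\approx 3.3 \times 10^{8}~\mathrm{L\,d^{-1}} \approx 3.3 \times 10^{5}~\mathrm{m^{3}\,d^{-1}}.
\]
```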
2.1. Soil and Swine Manure Characterization The soil and swine manure were sampled in the western region of Santa Catarina, Brazil. For the characterization, each sample of soil was dried in an oven (100 °C). The soil was then disaggregated with a mortar and pestle. All processes were carried out according to NBR 6457 . The soil samples were classified by particle size using NBR 7181 , Atterberg’s limits (liquid limit—LL; plastic limit—PL) using NBR 7180 and NBR 6459 , and the weight-specific grain value using ME 093 ( ). The total solid content was quantified using a gravimetric assay . Total organic carbon was quantified using a TOC analyzer (Multi C/N 2100, Analytik Jena, Jena, Germany), at a flow rate of 160 mL min −1 , using oxygen as a carrier. The temperature was set at 900 °C. Briefly, the samples were filtered through 0.45 µm membrane filters (Millipore, Burlington, MA, USA), acidified with phosphoric acid (40% w w −1 ) (Sigma-Aldrich, EUA, St. Louis, MI, USA), and injected (250 µL) immediately into the analyzer. Calibration curves were generated by serial dilution of a stock solution of 1 g L −1 biphthalate (Synth, São Paulo, Brazil). Biological oxygen demand (BOD 5 ) was determined in accordance with 5210-B, Standard Methods for the Examination of Water and Wastewater . Alkalinity was determined by titration using sulfuric acid (0.1 M, Merck, Darmstadt, Germany) as a titrant. Alkalinity was determined as CaCO 3 L −1 : [(M × A × 10,000)/V]; where M is molarity of standardized acid (M); A is the acid volume dispensed to reduce sample pH to 4.5 (mL) and V is total sample volume (mL) . The ascorbic acid colorimetric method was used to measure the concentration of phosphate-P (4500-P, Standard Procedures for the Analysis of Water and Wastewater ). The reagent solution was prepared using 50 mL of sulfuric acid (5 N) (Sigma-Aldrich, St. Louis, MI, USA), 5 mL of antimony potassium tartrate solution (Sigma-Aldrich, St. Louis, MI, USA), 15 mL of ammonium molybdate solution (Synth, São Paulo, Brazil), and 30 mL of ascorbic acid solution (Synth, São Paulo, Brazil). Subsequently, 0.8 mL of this solution was added to 5 mL of the previously filtered samples (0.45 μm membrane filter, Millipore, USA). After 10 min, the absorbance of each sample was measured in a UV-Visible spectrophotometer (Pharo 300, Merck) at 880 nm. The standard curves were generated by serially diluting a stock phosphate-P solution (0.05–0.2 mg-P L −1 ) (Merck, Darmstadt, Germany). Potentiometric analysis using a selective electrode method was used to measure ammoniacal NH 3 -N (4500-NH 3 D, Standard Procedures for the Analysis of Water and Wastewater ). The reagent solution was prepared NaOH/EDTA (10 N) (Neon, Sao Paulo, Brazil) and sodium hydroxide (10 N) (Neon, Sao Paulo, Brazil). The standard curves were generated by serially diluting a stock NH 3 -N solution (0.1–1000 mg-NH 3 -N L −1 ) (Merck, Darmstadt, Germany). The concentrations of nitrite-N and nitrate-N were determined by the N-(1-naphthyl)-ethylenediamine dihydrochloride colorimetric method and were measured at a wavelength of 550 nm (4500-NO 2 - B and 4500-NO 3 - F, Standard Procedures for the Analysis of Water and Wastewater ). Calibration curves were prepared by serial dilution of nitrite-N (0.1–2.0 mg-N L −1 , Merck, Darmstadt, Germany) and nitrate-N (0.1–3.0 mg-N L −1 , Merck, Darmstadt, Germany). pH was determined using a pHmeter (pH–mV, Hanna Instruments, Inc., Woonsocket, RI, USA). The data are shown in . 2.2. 
Preparation of the Bacterial Inoculum For the preparation of the inoculum spiked into swine manure, standard strains of E. coli and S. enterica serovar Senftenberg were spread on nutrient agar (Kasvi ® ) and incubated at 37 °C for 24 h. Following this, batches of bacterial colonies were gradually added to 10 mL of a 0.9% saline solution until they reached turbidity comparable to the 0.5 McFarland standard (Remel ® ), which corresponds to 1.5 × 10 8 bacteria per mL. This suspension was combined with swine manure and immediately applied to the soil. The volume of swine manure used in this study was comparable to that applied to corn, wheat, and soybean crops (50 m 3 ha −1 ) . 2.3. Microbial Survival Assay The sampled soil was deposited in 1 L reactors and artificially contaminated with bacterial suspensions containing E. coli and S. senftenberg at concentrations comparable to the 0.5 McFarland standard (Remel ® ). Samples were collected at time zero (T0), daily, and then every 5 days until all bacteria died. For E. coli quantification, samples were serially diluted at base 10, plated in Chromocult ® Agar using the pour plate technique, and incubated at 37 °C for 24 h, and the count of typical colonies was determined according to the manufacturer’s instructions. To quantify S. senftenberg , the samples were serially diluted to base 10 in saline solution and placed on XLD Agar for 24 h incubation at 37 °C, followed by standard colony counting according to the manufacturer’s instructions. The results are expressed as colony-forming units (CFU). 2.4. Microbial Percolation Assay Three soil column reactors, 70 cm high and 30 cm in diameter, fabricated from polyvinyl chloride (PVC) tubes, were used in the experiment. On the side, 1 cm diameter access slots were made at depths of 10, 20, 40, and 60 cm to allow soil sample collection during the experiment. The soils were rearranged in the columns in the same order in which they were removed from their original place on the farm (up to 60 cm deep). The columns were left undisturbed for a week to allow the soil to stabilize . Then, the soils were fertilized with swine manure artificially contaminated with known concentrations of the model bacteria. To monitor the percolation of microorganisms in the soil, 1 g soil samples were collected at the different depths . Samples were collected regularly until all bacteria died. 2.5. Microbial Leaching after Rain To carry out the leaching experiments, after fertilization, the soil columns were exposed to a precipitation of 53 mm (at an environmental temperature of 20 °C). This experiment was conducted on a rainy day, representing the real conditions that occur in the field. The rain volume was measured to calculate the precipitation. A tap was installed at the bottom of each column to allow the leaching liquid to be collected. The leachate from the soil was collected using a sterile collector tube at 2, 4, 8, 12, 24, 36, and 48 h after the rain, and the enteric bacteria were quantified . 2.6. Inactivation Kinetics The inactivation coefficient and the time required for a 1 Log 10 reduction of the model bacteria (T 90 = 1/− k ) were calculated according to Ottoson et al. , considering the linear regression curve with r 2 ≥ 0.75. 2.7. Statistical Analysis A t -test was used to evaluate the changes in enteric bacteria behavioral profiles in soil over time. One-way analysis of variance (ANOVA) was used to evaluate differences between the depths, using a 95% confidence level, followed by Bonferroni’s multiple comparison test (GraphPad Prism 5.0).
The critical p -value for the test was set at ≤0.05.
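To make the kinetics and depth-comparison calculations concrete, the sketch below shows how the inactivation coefficient k, T 90 , and a depth-wise comparison could be computed from plate-count data. This is a minimal illustration only: the counts, sampling days, and depth groups are hypothetical placeholders, not data from this study, and the original analyses were performed in GraphPad Prism 5.0 rather than Python.

```python
# Minimal sketch (hypothetical data): log-linear decay fit for k and T90,
# plus a one-way ANOVA across sampling depths, mirroring Sections 2.6-2.7.
import numpy as np
from scipy import stats

# Hypothetical survival series: sampling day vs. viable count (CFU g-1)
days = np.array([0, 1, 2, 3, 5, 10, 15, 20])
cfu = np.array([1.5e8, 9.0e7, 6.1e7, 4.0e7, 1.8e7, 3.2e6, 6.0e5, 1.1e5])

# Fit log10(CFU) against time; the slope of this line is -k
log_cfu = np.log10(cfu)
fit = stats.linregress(days, log_cfu)

r_squared = fit.rvalue ** 2
if r_squared >= 0.75:                      # acceptance criterion from Section 2.6
    k = -fit.slope                         # inactivation coefficient (d-1)
    t90 = 1.0 / k                          # time for a 1 log10 (90%) reduction
    print(f"k = {k:.3f} d-1, T90 = {t90:.1f} d, r2 = {r_squared:.2f}")
else:
    print(f"Fit rejected: r2 = {r_squared:.2f} < 0.75")

# Hypothetical counts (log10 CFU g-1) recovered at three depths on one sampling day
depth_10cm = [5.1, 5.3, 5.0]
depth_20cm = [4.2, 4.4, 4.1]
depth_40cm = [2.9, 3.1, 3.0]

# One-way ANOVA across depths (alpha = 0.05), as in Section 2.7
f_stat, p_value = stats.f_oneway(depth_10cm, depth_20cm, depth_40cm)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Bonferroni-style pairwise follow-up: multiply each pairwise p-value
# by the number of comparisons and compare against 0.05
pairs = [("10 vs 20 cm", depth_10cm, depth_20cm),
         ("10 vs 40 cm", depth_10cm, depth_40cm),
         ("20 vs 40 cm", depth_20cm, depth_40cm)]
for label, a, b in pairs:
    t_stat, p = stats.ttest_ind(a, b)
    p_adj = min(p * len(pairs), 1.0)
    print(f"{label}: adjusted p = {p_adj:.4f}")
```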
3.1. Enterobacteriaceae Decay Profile in Soil The survival of pathogenic enterobacteria in clayey soil fertilized with swine manure spiked with E. coli and S. senftenberg is depicted in . After 7 days, the E. coli concentration decreased by 90% (1 log 10 ) and remained stable for 25 days; a significant decrease in E. coli concentration was observed after 43 days ( p < 0.05). A different response was found for S. senftenberg ( B), which required 9 days to reduce the concentration by 90% (1 log 10 ). Additionally, 13 days were required for the elimination of S. senftenberg (10 4 CFU). It is worth noting that untreated swine manure can present an E. coli concentration of 10 7 MPN 100 mL −1 , so even after a 90% reduction, the bacterial load in manure remains high and is able to contaminate soil, where it can remain active for more than 30 days. According to the World Health Organization, the recommendation for water reuse is an E. coli concentration lower than 10 3 MPN mL −1 for the fertigation of crops that are not directly consumed . Previous studies conducted by our group showed that swine manure treatment consisting of anaerobic digestion followed by a pond system is suitable for removing pathogenic bacteria, leading to concentrations below 10 3 CFU . However, a low concentration of E. coli does not guarantee the absence of other pathogens, such as viral particles, which have environmental survival times greater than two months . Pathogen survival in environmental matrices is affected by factors such as climatic conditions, temperature, pH, agrochemicals, aeration, soil type, and the presence of other microorganisms (due to predation or competition) . Additionally, survival can be influenced by plants cultivated in the soil; Maule reported that the greatest survival of bacteria occurs in soil containing rooted grass. Similar results were observed after applying livestock manure to soil, where E. coli O157:H7 and Salmonella persisted in the soil for up to one month after its application to both sandy and clayey grassland soils . Studies on the soil application of swine manure revealed that after the 20th day, the quantity of bacteria decreased very slowly, independent of the amount of sludge used, such that after 80 days, an estimated concentration of 10 3 CFU dry matter −1 remained in the soil . The estimated average time required to obtain undetectable E. coli concentrations in sandy soil ranged from 56 to 70 days . E. coli O157:H7 continued to survive after 60 days in Brown soil sand and silts, with a decrease of 0.7 to 2.5 log 10 CFU g −1 . During the same period, the E. coli O157:H7 concentration in Brown soil clay containing natural organic matter increased by 0.58 log 10 CFU g −1 compared to the original inoculation (from 6.68 to 7.26 log 10 CFU g −1 ) . On the other hand, the concentration of E. coli O157:H7 in Brown clay without natural organic matter had been reduced to undetectable levels by day 24 . The clay concentration in soil has been recognized to have a significant impact on enterobacteria survival in soil, typically improving survival. Some of the most common clay minerals found in soils include kaolinite, montmorillonite, and illite . Brennan et al. studied the effect of clay mineral type on enterobacteria survival in soil. After 96 days of experimentation, the reduction in E. coli O157:H7 in the soil was 10 6 CFU g −1 , whereas, with the addition of kaolinite, montmorillonite, and illite, the reduction was 10 4 , 10 3 , and 10 2 CFU g −1 , respectively.
Clay minerals constitute the most active inorganic colloid components in soils, influencing bacterial adhesion, metabolism, colonization, and biofilm formation . Clays with the highest surface areas and specific surface electrical characteristics were more efficient than silts and sands in attaching E. coli O157:H7 . The attachment of bacteria, the first step in biofilm formation, stimulates the organism to produce extracellular polymeric substances such as polysaccharides, proteins, lipids, and nucleic acids, which form a protective matrix around the bacterial surface and protect cells from adverse environmental conditions . In this respect, higher adhesion led to gradually longer E. coli O157:H7 survival in clay soil . Surface-attached bacteria may have a different physiological or metabolic state in terms of gene transcription for growth and metabolism, which increases the chances of microbial species establishing and persisting in difficult environments . 3.2. Decay Kinetics of Enterobacteriaceae in Soil Pathogens discharged with manure particles are exposed to various processes and routes that decide their die-off or growth, as well as their final deposition or fate . Nevertheless, to contaminate water resources and possibly infect humans or animals, a pathogen must be able to survive after fertilization and endure the processes it may face at the soil surface, during transit through the soil, or after entrainment in the overland flow . According to the findings in this study, S. senftenberg had a slightly lower inactivation rate (0.096 d −1 ) than E. coli (0.1029 d −1 ) ( ). Additionally, for E. coli , a 90% reduction takes 9.71 days, whereas S. senftenberg requires 10.4 days to be 90% inactivated (1 log 10 ). Similar T 90 values were obtained in sandy soils after swine digestate application for S. enterica Typhimurium (11.9 d) and E. coli O157:H7 (10.75 d) . The inactivation coefficient ( k ) can be influenced by enterobacteria-specific and clay mineral properties, as shown by Brennan et al. . In this regard, E. coli O157:H7 exhibited k values of 0.30, 0.23, 0.15, and 0.06 in clayey soil (without mineral addition), a soil kaolinite mix, a soil illite mix, and a soil montmorillonite mix, respectively, whereas Salmonella Dublin exhibited k values of 0.30, 0.18, 0.20, and 0.05 in the clayey soil (without mineral addition), soil kaolinite mix, soil illite mix, and soil montmorillonite mix, respectively . 3.3. Percolation of Enterobacteriaceae in Soil As shown in A, E. coli was found up to a depth of 60 cm 48 h after swine manure application, most likely due to fertilizer drag. There was a significant reduction ( p < 0.05) in the first five days at soil depths of 10 cm and 20 cm. E. coli strains remained viable in the soil column, similar to the survival results depicted in A. S. senftenberg ( B) did not penetrate the deepest soil layers, reaching only a depth of 20 cm. There was a significant decrease in the S. senftenberg concentration in the soil layers (10 and 20 cm) in the first 48 h and a reduction to zero by the 16th day after swine manure application ( p < 0.05). The movement of microorganisms in soil is influenced by intrinsic microbial features such as size, shape, cell surface characteristics, and biochemical and enzymatic properties . In this sense, the differences observed between the bacteria used in this study could be explained by cell size, where Salmonella enterica is a rod-shaped bacterium ranging from 2.2 to 5.0 μm , while E.
coli cells are smaller at 1–2 μm , with smaller cells percolating longer. The number and size of microbial cells impact the settling velocity of manure. Microorganisms have a low density in general; hence, they are likely to remain suspended once entrained . Suspended bacteria present in swine manure can travel quickly across the profiles of well-structured soils at moderate to high rates of water content through macropores and worm-holes. Any field soil that has macropores and receives enough water to fill these holes is likely to facilitate the fast transport of suspended bacteria to the depth at which these macropores are continuous . A sandy soil with wider pores will allow for easier passage through the soil matrix than a clayey soil with fewer pore spaces . Chemotactic migration permits motile bacteria to move more efficiently in response to environmental conditions (favorable or otherwise). They may also be capable of swimming toward soil pores and surface irregularities that would otherwise be inaccessible ; hence, their transport capability is increased. Others can use flagellar motion to move toward helpful substances such as nutrients, which promotes more mobility across the environmental medium . Members of the Pseudomonas , Achromobacter , Bacillus , Flavobacterium , and Enterobacter genera have exhibited different transport potentials . Sepehrnia et al. reported that E. coli cells are expected to be more influenced by hydrodynamic forces compared to smaller-sized bacteria . The adhesion of Salmonella to soil has been shown to be correlated with cell surface hydrophobicity . Huysman and Verstraete found that hydrophobic strains were 2–3 times slower to percolate through soil columns, as observed with the Salmonella in the present study. 3.4. Leaching of E. coli in Soil Rain can promote the survival of pathogenic bacteria by keeping the soil wet, and it can also move bacteria through the soil to more or less suitable areas, as well as potentially contaminate groundwater . shows the behavior of E. coli in clayey soil fertilized with swine manure exposed to rain. The samples obtained in this phase of the study were not from soil, but from the liquid fraction (leachate) that exceeded 60 cm of the soil column, simulating rains on swine-manure-fertilized soil. As a result, after 5 min of rain, approximately 10 3 CFU reached a depth of 60 cm, and after 48 h, all water had percolated and the total bacteria concentration was reduced. This result indicates that the bacteria leaching in the first 24 h and the water eliminated in the last 24 h correspond to the water retained in the soil particles. Furthermore, the use of liquid manure is predicted to improve microbial release and transport efficiency . Manure compounds in liquid-based materials are more quickly recoverable and more influenced by the impact of precipitation or the flow of water than solid-manure compounds, which are more aggregated (adhered to material surfaces) . Thus, since bacteria have greater mobility in the liquid phase than in the solid phase, liquid manure tends to be more uniformly polluted than solid manure . Other studies reported the depth-dependent survival of E. coli and enterococci in soil following manure application and simulated rainfall of 30, 60, and 90 mm. In the first few days, E. 
coli concentrations increased and then gradually decreased to the initial amount; however, enterococci populations decreased at the beginning and were inactivated after 4 weeks, except when 30 mm of rain was applied: in this condition, the survival was longer than the 21 days of the experiment . The bacterial activity decreases by one or two orders of magnitude for every 2 m of depth . All of these findings highlight the diverse behavior of microorganisms in soil, depending on the soil type, microbial strains, manure load, and environmental conditions such as rain volume. During the application of manure without rain, there is a long survival period, but not with a long spread; in rainy periods, vertical leaching occurs faster. In this context, farmers should be encouraged to use environmentally friendly agriculture and manure management practices. Given the diversity of agricultural conditions, such farm and manure management solutions should be adaptable and pragmatic in design. A comprehensive combination of tactics that considers geographical, environmental, sociocultural, and economic differences would be suitable. Farmers’ knowledge and understanding must be improved, particularly in rural regions. It is critical to emphasize the need to use effective manure treatments and avoid applying new/raw manure .
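As a quick consistency check on the kinetics reported in Section 3.2 (no new data; the inactivation coefficients are those given above), the T 90 values follow directly from the assumed log-linear decay model:

```latex
% Log-linear decay: log10 N(t) = log10 N0 - k t, so a 1 log10 (90%) reduction takes T90 = 1/k
\[
T_{90} = \frac{1}{k}, \qquad
T_{90}^{E.~coli} = \frac{1}{0.1029~\mathrm{d^{-1}}} \approx 9.7~\mathrm{d}, \qquad
T_{90}^{S.~\mathrm{Senftenberg}} = \frac{1}{0.096~\mathrm{d^{-1}}} \approx 10.4~\mathrm{d}.
\]
```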
This work evaluates the behavior (survival and percolation) of E. coli and S. senftemberg in clayey soils fertilized with swine manure. The results indicate that E. coli survives for a longer period (43 days) than S. senftemberg (14 days); E. coli percolates quickly through the soil. During a rainy event (53 mm), E. coli percolated 60 cm in less than 5 min, and it was possible to find viable bacteria up to 24 h after the rain. The results show the importance of reducing enteric pathogens in animal manures before their field application, which is critical for lowering the risk of produce-related foodborne diseases. Considering the characteristics of swine-producing regions, the load of effluents applied to the soil may exceed the self-purification capacity of the environment, and percolation or surface runoff may occur, with the consequent contamination of water bodies by pathogens.
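To translate the observed survival windows into comparable kinetic terms, the sketch below back-calculates first-order die-off rate constants and D-values (days per log10 reduction), assuming a hypothetical 5-log10 drop between application and the last day of detection; the initial load and detection limit are assumptions, not measured values from this work.

```python
# Illustrative sketch (assumptions, not study data): first-order die-off
# constants implied by the reported survival windows if a 5-log10 reduction
# separates the applied load from the detection limit.
import math

ASSUMED_LOG10_REDUCTION = 5.0           # hypothetical drop to non-detection
survival_days = {"E. coli": 43.0, "S. senftemberg": 14.0}

for organism, days in survival_days.items():
    k = math.log(10.0) * ASSUMED_LOG10_REDUCTION / days   # rate constant, 1/day
    d_value = days / ASSUMED_LOG10_REDUCTION              # days per log10 drop
    print(f"{organism}: k ~ {k:.2f} per day, D-value ~ {d_value:.1f} days/log10")
```

Under this assumption, S. senftemberg would need to be inactivated roughly three times faster than E. coli, consistent with the shorter survival reported above; changing the assumed log reduction rescales both constants but not their ratio.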
|
The Role of Title 1 Secondary School Athletic Trainers in the Primary and Patient-Centered Care of Low Socioeconomic Adolescents
|
0dad3bbb-69f1-4237-9b19-7b38a3024c21
|
10094508
|
Patient-Centered Care[mh]
|
Social determinants of health (SDoH), or the conditions wherein people are born, grow, live, learn, and work, can significantly impact health outcomes. Individuals residing in low socioeconomic communities face barriers to overcoming these SDoH, particularly lack of healthcare access and quality. Adolescents from lower socioeconomic backgrounds have been shown to have inferior health outcomes. As many as one in eight adolescents lack a usual source for routine preventive care, and as many as one in eleven adolescents report having no usual source of care when sick or injured. This is due in part to health inequities suffered by these individuals, such as lack of insurance, underinsurance, greater reliance on public insurance, and less access to healthcare providers. Heads of households report spending significant time searching for public health insurance; once coverage is secured, policyholders report spending significant time searching for clinicians and facilities that accept their plans. Compounding these obstacles, there is currently a shortage of primary care providers (PCPs) that is expected to grow through 2030. To address the challenges encountered by these medically underserved populations, creative use of healthcare resources will be required. Although athletic trainers (ATs) are commonly perceived to treat only sport-related injuries and conditions, they have continued to expand their practice and are well suited to addressing this healthcare challenge. Several of the athletic training (AT) practice domains overlap with the family practice sub-competencies of the Accreditation Council for Graduate Medical Education (ACGME). For example, the first domain of AT entails risk reduction, wellness, and health literacy, which aligns aptly with the ACGME’s third patient care competency of partnering with the patient, family, and community to improve health through disease prevention and health promotion. Likewise, the fourth athletic training domain of therapeutic intervention describes the AT’s ability to rehabilitate and recondition injuries as well as general medical conditions with the goal of achieving optimal activity levels. This domain uses similar language to the fifth ACGME patient care competency by denoting both the family practitioners’ ability to perform specialty procedures to meet the healthcare needs of patients, families, and communities as well as their knowledge about the procedures performed by other healthcare professionals to guide their patients’ care. Given this similarity, ATs should be viewed as a potential strategy for helping to alleviate the growing primary care shortage for adolescents in lower socioeconomic communities. While still considered a luxury in some regions of the country, AT services are generally free of charge if provided to student–athletes at secondary schools. Therefore, for adolescents from lower SES backgrounds who have limited-to-no access to quality health care, interactions with ATs through interscholastic activities may be one of the only regular encounters they have with a healthcare provider. This positions ATs well to serve as the first point of contact regarding general, non-orthopedic medical concerns and a vital bridge to the healthcare system for this vulnerable patient population.
Secondary school ATs can use patient-centered strategies to guide student–athletes to school-based or free community health centers and serve as advocates for these services in the event that they do not exist . Therefore, the objective of this study was to describe the experiences of ATs providing primary care for adolescent student–athletes attending Title 1 secondary schools.
2.1. Research Design To gain an understanding of the participants’ experiences providing primary care in Title 1 schools, we used a qualitative design consisting of in-depth, virtual focus groups guided by an interpretative phenomenological analysis (IPA) research approach. The IPA approach was constructed from Guba’s Critical Theory Paradigm (1990) in addition to Burrell and Morgan’s (1979) for the purpose of determining the impact of a problem or issue on the ‘lived experience’ of the research participants. This study was approved for non-exempt human subject research by the Institutional Review Board (IRB) at Florida International University. 2.2. Instrumentation A semi-structured interview guide was developed and used to assist investigators in exploring the experiences of ATs during focus groups. This protocol, comprising 5 open-ended questions and additional follow-up questions, was used to gain more information regarding the context of the athletic trainers’ experiences. Content and design experts reviewed the interview protocol for content validity. The interview guide can be found in . 2.3. Participants and Procedures The use of an IPA design warranted a homogeneous sample of approximately 12 participants with similar lived experiences. Therefore, initial recruitment began in March 2021 through invitation letters sent to a convenience sample of secondary school ATs. When the targeted number of participants was not obtained, a snowball sampling method, in which qualified participants recruited additional participants meeting the inclusion criteria, was used through July 2021 until data saturation was reached (n = 11). Participants were included if they were ATs practicing at Title 1 secondary schools. Title 1-A grants provide supplementary education and related services to schools, pre-kindergarten through grade 12, with relatively high concentrations of students from low-income households. Invitations to participate, which detailed the study’s purpose, design, total time commitment, and incentives for participation, were sent to a convenience sample of participants via GroupMe (New York, NY, USA) and Twitter (San Francisco, CA, USA). The principal investigator (N.A.H.) collected names, the secondary schools of practice, and school-issued email addresses from ATs expressing interest in study participation. The inclusion criteria were verified by the principal investigator (N.A.H.) through communication via the ATs’ school-issued email addresses. Current Board of Certification, Inc. (BOC) certification was also verified using the BOC website. Likewise, Title 1 school status was verified by the principal investigator (N.A.H.) using the National Center for Education Statistics website at nces.ed.gov (accessed on 12 July 2021). Eligible participants were emailed via their school-issued email addresses with details regarding participation. Recruitment emails also provided a link to complete a survey via Qualtrics (Provo, UT, USA) that gathered participants’ demographic information. Electronic informed consent was provided at the beginning of the survey. A Doodle (Zürich, Switzerland) link was also housed at the end of the survey for participants to indicate their preferred availability for focus group participation. All participants provided electronic informed consent before scheduling focus group participation.
Once dates and assignments were confirmed, virtual focus group sessions were held via Zoom (San Jose, CA) on 3 separate dates dependent on the participants’ availability. Each focus group was limited to 4 participants. Participants were assigned to a waiting room upon entry, after which they were instructed to modify their name to a pseudonym of choice for the protection of privacy. The principal investigator (N.A.H.) and co-investigator (M.L.O.) served as session moderators who posed initial questions and prompted group discussion, modified questions to progress the conversation, and asked supplementary questions as needed. While participants answered each question individually, moderators encouraged interaction amongst the group to explore convergent and divergent perspectives. Each session was recorded with the verbal informed consent of participants via a transcription service embedded in the Zoom platform. 2.4. Data Analysis The responses from the survey were collected using Qualtrics. All collected data were downloaded and transferred to SPSS (version 26; IBM Corp, Armonk, NY, USA) for analysis of descriptive statistics. Counts and percentages were used to summarize participant demographics. The mean interview duration was approximately 50 min. Interview transcriptions were proofed for accuracy against the recording by a third-party researcher with competency in qualitative data collection and management. Data analysis was approached using a 7-step process, as outlined by Charlick et al. . The analysis process began by listening to audio in addition to reading and re-reading through the first transcript to gain a comprehensive understanding of the ATs’ experiences followed by a third reading which involved generating initial notes of free associations in the margins of the text. In vivo coding was used to place emphasis on the specific language used by the AT participants in support of the IPA design and the desire to report on the participants’ experiences. Portions of the transcripts and notes were analyzed for significant words and phrases used by the participants, and this language was developed into codes using spreadsheet software. Codes were then organized into initial emergent themes reflecting the meanings of the participants’ experiences. Researchers continued by seeking to identify connections between the themes. Honoring the individuality of each focus group, the analysis process was then repeated for the remaining two transcripts before patterns were identified across all three cases. The analysis was completed by the principal investigator (N.A.H.) and an external researcher, independently. When the transcript review was complete, the two analysts met as a team to discuss the results and reach a consensus on the identified themes. As a validity check, the principal investigator (N.A.H.) presented individual responses and common themes to participants as necessary to request any additions or deletions to improve the accuracy of the identified themes. Final themes were qualitatively summarized into results, and direct quotations were selected, as suggested by Pietkiewicz and Smith and Pringle et al. , to give depth to the findings. Data credibility was established through member checking, multiple-analyst triangulation, and a peer review.
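The demographic summary described here (counts, percentages, and means with standard deviations) can be reproduced with standard tooling; the sketch below shows one way to do so in Python with pandas, using hypothetical data and column names, since the actual survey export was analyzed in SPSS.

```python
# Minimal sketch (hypothetical data and column names; the study used SPSS):
# computing the counts, percentages, and means +/- SD reported for
# participant demographics from a survey export.
import pandas as pd

# Hypothetical export of the Qualtrics demographic survey.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "F", "M", "F", "F", "M", "F", "F", "F"],
    "age": [24, 28, 52, 31, 30, 45, 26, 38, 29, 41, 30],
    "years_certified": [2, 4, 30, 8, 6, 22, 3, 12, 5, 18, 6],
})

# Counts and percentages for categorical variables.
counts = df["gender"].value_counts()
percents = (counts / len(df) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percents}))

# Mean +/- SD for continuous variables.
for col in ("age", "years_certified"):
    print(f"{col}: {df[col].mean():.1f} +/- {df[col].std():.1f}")
```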
Data saturation was reached at eleven participants after redundancy in responses occurred in the third focus group session. Participants were 72.7% (n = 8) female and 27.3% (n = 3) male, with an average age of 34.0 ± 10.8 years. Regarding race/ethnicity, participants were 45.5% White, non-Hispanic/Latino (n = 5), 36.4% Hispanic (n = 4), 9.1% Black (n = 1), and 9.1% (n = 1) American Indian or Alaska Native. Participants averaged 10.5 ± 10.8 years of athletic training certification, with 7.6 ± 7.9 years of overall practice and 7.3 ± 7.9 years of practice at a Title 1 school. The demographics of individual participants can be found in . 3.1. Experiences Providing Primary Care Services in Title 1 Secondary Schools Qualitative data from the focus groups revealed numerous SDoH affecting adolescent patients’ overall health, well-being, and quality of life, in addition to sometimes preventing or delaying patients from receiving care for non-orthopedic health concerns ( ). These SDoH often affected participants’ practice as ATs. A distinctive and overarching theme emerged suggesting ATs in Title 1 schools internalized numerous roles related to helping their patients overcome SDoH, which served as a barrier to their access to quality healthcare. Additionally, key interrelated subthemes surfaced regarding the ATs’ roles, including (1) role preparation, (2) role clarity, (3) facilitating patient-centered care, (4) limited integration of care, and (5) patient-centered strategies used to overcome access and quality barriers. 3.1.1. Role Preparation Participants described a range of educational experiences relevant to caring for non-orthopedic concerns. Of the 11 participants, half indicated their preparation for their role included some degree of formal general medicine coursework during their professional program or training. However, this education was described as “broad”, “general”, and “non-specific”. The other participants drew knowledge from their prerequisite courses or sought out continuing education courses specific to primary care and non-orthopedic conditions that interested them. Several participants reported that the most valuable preparation for their role within their community was obtained informally through their interactions with physicians or other health providers. Likewise, and notably, two participants also discussed informal practice-based networks of AT colleagues. These networks shared issues (e.g., presentations of skin conditions) common to their community and patient population to exchange advice on how to proceed or navigate limited resources. Two others were employed through outreach (e.g., a physical therapy clinic) which offered regular professional development and annually reviewed skills such as auscultation. Of importance, two participants also reported no preparation or education relevant to primary care, although all noted the importance of this education to their daily practice and practice setting. 3.1.2. Role Clarity Participants self-identified with numerous roles associated with the primary care of their patients ( ). These roles included serving as a caregiver, care coordinator, advocate, and educator. In the caregiver role, participants often identified pre-existing health conditions or risk factors named by the patient’s parent or guardian and implemented preventive measures to protect the patient during sports participation.
Likewise, ATs aimed to evaluate emergent health concerns or early warning signs of pathology, provide basic treatment, and initiate swift referrals. While ATs identified their role as a caregiver, all participants reported their primary role to be a navigator, or care coordinator, responsible for coordinating specialist care (e.g., dermatological, neurological, etc.), as they acknowledged that more highly trained professionals or specialists were more likely to quickly address the root of the problem. When working as care coordinators, they collaborated with team physicians or school nurses to obtain needed care for non-orthopedic conditions. Participants also viewed themselves as patient advocates who ensured that patients and their families were able to navigate the healthcare system, obtain quality care, and understand the recommendations and information offered. ATs also highlighted their roles as educators. When coordinating specialist care, participants emphasized their desire to educate the parents and guardians of the patients and help them navigate the healthcare system. Moreover, as educators, they sought to provide their patients with advice on risk reduction and health promotion. While participants were able to clearly self-identify these roles, they noted that others (e.g., administrators, coaches, parents) were often confused about their role and responsibilities in the management of non-orthopedic conditions ( ). Participants noted that these individuals often lacked clarity and understanding regarding the role and scope of practice of athletic trainers. In some cases, this role confusion resulted in delayed or missed opportunities for intervention with health concerns and underutilization of AT services. New students often held preconceived notions regarding athletic trainers’ willingness to help or intervene based on their past experiences with ATs at other schools or sporting events, avoiding encounters due to the assumption that the AT would not be willing or able to help. Furthermore, participants reported that despite the school administrators’ role as supervisors to athletic trainers, some lacked knowledge and understanding of the ATs’ scope of practice. This confusion resulted in the athletic trainers’ expectations being shifted toward forward-facing tasks (e.g., providing water and taping ankles) rather than patient care. Lastly, participants acknowledged parents and families as the ultimate decision-makers in an adolescent’s healthcare. However, some participants reported missed or underutilized opportunities for patient care resulting from parents’ ignorance or confusion about the athletic trainer’s role. 3.1.3. Facilitating Patient-Centered Care Reflecting on their own experiences, participants highlighted how their unique role within the schools and communities facilitated patient-centered care ( ). Direct access to the patients influenced the relationships built regarding trust, communication, and coordination of care. Positive, trusting relationships between patients and ATs can facilitate conversations and early interventions in general health concerns. Each participant emphasized the trust, respect, and rapport they were able to build with patients and, in some cases, with patients’ families and communities. This trusting relationship meant patients were comfortable raising concerns to their ATs, often before family, physicians, or even school nurses.
Furthermore, regular observation of and communication with patients through their proximity to athletics put ATs in a unique position to understand the athletes’ overall health, individual characteristics of expression (e.g., pain tolerances), and their unique social conditions. Thus, when unusual or persistent symptoms arose, ATs could identify what necessitated urgent referral and help the patient navigate the needed care. Moreover, their regular encounters with patients inside and outside of the athletic training facility allowed them to carefully watch over patients. Additionally, participants felt their presence within the schools facilitated the coordination and integration of care with school counselors, nurses, and coaches to monitor student–athletes’ well-being, ensure basic needs were being met, and offer resources such as food or transportation. Collaboration also included the patients, as ATs emphasized their educational roles in health promotion, literacy, and navigation. Finally, a student–athlete’s personal motivation to continue in a sport facilitates exceptional adherence to the athletic trainer’s recommendations. 3.1.4. Limited Integration of Care Each participant reported referring patients to external providers. However, all participants also discussed obstacles to collaboration and communication among providers. Integration of care was restricted by numerous factors. One barrier was the high turnover of primary care physicians, which inhibited relationship building and ATs’ awareness of physicians’ accessibility to patients. A second barrier was the limited dedicated time in their workday to build relationships with physicians outside of school. A third barrier was the difficulty of navigating and staying up to date with healthcare resources (e.g., physicians who accept a variety of insurance plans, food distribution centers, and counseling) dispersed throughout a large school district or metro/rural area. A fourth barrier was the limited availability of specialty providers (e.g., dermatologists) despite the frequent occurrence of some non-orthopedic conditions. 3.1.5. Patient-Centered Strategies to Overcome Access and Quality Barriers Finally, some participants shared strategies to overcome patient- and community-specific barriers and support students’ access to primary care. Utilizing students’ sports physical forms was recommended as a mechanism to quickly identify active community-based primary care physicians who are accessible to their student–athletes (e.g., accepting a variety of insurance plans) and who are knowledgeable about their SDoH. Participants discussed proactively reaching out to these physicians to build a relationship for future referrals. Another strategy was for schools or organizations employing ATs to maintain a master list of area physicians in non-orthopedic specialties, similar to the initiatives used for referrals to orthopedic specialists. Lastly, participants advocated for creative ways to work around the space constraints of their facilities, which limit privacy, so that students feel comfortable speaking openly regarding SDoH or needed resources. Strategies include making themselves available for discussion during times when the athletic training facility is less busy or when walking to and from facilities or venues, and providing students with discreet mechanisms (e.g., leaving a note on their desk) for expressing a concern or desire to speak privately.
Adolescents in low socioeconomic communities experience SDoH that serve as barriers to their healthcare access and quality. Fortunately, the presence of ATs in Title 1 secondary schools appears to be a protective factor for the student–athletes they serve. The athletic training services provided could reduce health disparities for these adolescents, disparities that are often related to social determinants of health. Therefore, ATs working in low socioeconomic communities should be prepared to ease the burden of these barriers for adolescent student–athletes. The primary aim of this study was to describe the experiences of athletic trainers providing primary care services in low socioeconomic communities. Interesting and valuable findings emerged from the lucid reports of the participants. Although secondary school ATs largely focus on the management of sport-related injuries, this study demonstrated that they are, in fact, uniquely positioned to encounter general medical conditions as well. Athletic trainers in this study encountered various non-orthopedic conditions in the secondary school setting, most notably neurological and psychological conditions, which aligns with the recent foci of educational efforts related to the management of sport-related concussion and mental health in athletic healthcare. Patient encounters arising from other body systems were sparse; however, they were still reported by participants. We have reason to believe the ATs in our sample may have under-recognized the care they provide for non-orthopedic conditions and therefore underreported these patient encounters. For example, only two participants spoke about encounters arising from the respiratory system (i.e., asthma), even though participants were interviewed during the COVID-19 pandemic. Winkelmann and Games reported that 28.2% of 611 ATs surveyed engaged in front-line screening or provided other AT services directly related to COVID-19. Screening activities can rightfully be classified as assessment, evaluation, or diagnosis, or, at minimum, as risk reduction and health promotion practices intended to avoid the spread of a respiratory illness. This suggests our AT participants may not classify these non-orthopedic services as general medicine or primary care. To the same point, when asked about the general medical conditions they encounter in their practice, no participants reported conditions arising from the genitourinary and gynecological systems. However, several ATs in our sample described vivid lived experiences involving consultations with their student–athletes regarding reproductive health, including menstrual cycle tracking and contraception, as well as gender-affirming care. This further reinforces our suspicion that ATs may overlook these vital consultations as general medical services. While these findings may not be strong enough to extend to all Title 1 secondary school ATs or ATs as a whole, they warrant further investigation into how ATs define primary care and general medical conditions. Even when non-orthopedic conditions are uncommon, ATs, particularly those in low socioeconomic communities, play an integral part in inclusive healthcare, not only in the management of these conditions but also in the wellness practices and health literacy that student–athletes develop during their adolescent years.
Being able to articulate the care provided for non-orthopedic conditions can not only improve ATs’ communication with external healthcare providers, both in conversation and in appropriate medical documentation, but can also give those external healthcare providers more confidence in viewing ATs as healthcare providers. Education plays a vital role in preparing ATs for the roles and responsibilities of patient-centered care. The findings of this study suggested ATs sought education regarding general medical conditions through both formal and informal mechanisms. Many ATs received formal education regarding general medical conditions in their professional education, while others sought out the information in mandatory or voluntary continuing education sessions. We highlight the value of knowledge ATs obtained from their local physicians and within practice-based networks of colleagues. However, a lack of comprehensive formal education may result in a lack of confidence in managing these conditions. The 2020 CAATE Standards for Professional Programs require that students receive didactic and/or clinical education regarding medical conditions originating from all major body systems. While the curricular content is being provided to students, Bacon et al. reported that as little as 3% of patient encounters recorded by AT students in clinical education were for non-orthopedic diagnoses. Limited clinical experience combined with knowledge decay resulting from infrequent non-orthopedic encounters may lead ATs to exhibit decreased confidence when evaluating and treating these conditions. Acknowledging this deficiency, we believe there is a significant and pressing need for continuing education in primary care that educates practicing ATs regarding general medical conditions and best practices for their management, in addition to the development and utilization of interprofessional relationships, which can help overcome access barriers and efficiently and effectively lead adolescents of all backgrounds to quality healthcare services. Regardless of their education, experience level, and confidence, the ATs who were sampled felt they served in numerous roles that facilitated student–athletes’ overall health and well-being. Sociopsychological theories of personality related to self-identification suggest that individuals select and pursue goals in a way that supports or enriches the identities to which they are committed. These roles included serving as a caregiver who triaged emergent conditions and initiated referrals, provided treatment within their scope, and offered social support, particularly for those with reduced healthcare access; a care coordinator who helped guide patients through the healthcare system; an educator who provided knowledge regarding risk reduction and health promotion; and an advocate who ensured needed resources and care were obtained. While ATs were at least somewhat confident in the evaluation and treatment skills required to manage non-orthopedic conditions, the findings of this study suggest Title 1 ATs are most comfortable coordinating care for patients and advocating on behalf of their specific healthcare needs. Participants largely agreed that even in their caregiver role, their responsibilities lay not in the recognition and management of non-orthopedic conditions affecting the student–athletes’ overall health but in the coordination of care.
In the care coordinator role, the ATs worked with patients and their families to understand their insurance coverage, find care providers who were accessible (e.g., translation services if needed, in a good location, and with convenient hours), and arrange their transportation. Athletic trainers reported using an interprofessional and collaborative approach, working with school counselors, nurses, and coaches to monitor student–athletes’ well-being and offer resources as necessary. Care coordination was viewed as the most important role, particularly when working with students and families with poor health literacy and/or inexperience with the healthcare system. Care models for these individuals should be centered around the patients and their individual circumstances. To provide patient-centered care, there must be continuity and integration of care between primary and specialty providers. Social barriers exist which complicate care coordination and prohibit adolescents in low socioeconomic communities from receiving timely and effective primary care for non-orthopedic conditions. Athletic trainers at Title 1 schools were able to mitigate some access barriers for students–athletes attending their schools. Access to an AT alone reduces the negative effect of a lack of transportation on a patient’s health by reducing the need to seek healthcare outside of a school environment . Furthermore, the unique position of ATs within schools and their athletic programs facilitated strong, trusting relationships to be established. Trust and respect are values inherent to quality, patient-centered health care. These relationships enabled easy reporting of orthopedic and non-orthopedic conditions by students as well as early recognition of general medical concerns by healthcare providers. Furthermore, ATs reported playing a crucial role in providing social support to their patients. Clement et al. studied injured athletes’ perceptions of social support from peers, coaches, and ATs and found that social support from ATs had a significant effect on overall health and well-being. Therefore, building positive, supportive relationships with patients may help ATs promote health in their patients. Although ATs largely focused on identifying orthopedic and non-orthopedic conditions that served as barriers to students’ athletic performances, their positions allowed them to distinguish more comprehensive conditions that spanned multiple body systems. Athletic trainers described both a desire and ability to mitigate some SDoH by managing certain general medical conditions in-house. In this sample, ATs were most commonly able to circumvent transportation and insurance barriers by performing post-operative rehabilitation in-house or using their trusting relationships to initiate initial conversations regarding food insecurity or mental health care. However, numerous barriers still endured when student–athletes needed specialty care outside of the ATs’ scope of practice. Despite changes resulting from the Affordable Care Act, nearly all ATs in this study cited insurance as the most challenging of these barriers. While all schools in this study required student–athletes to purchase an insurance policy, this policy could be utilized only for injuries and conditions resulting from their participation in sports, therefore failing to facilitate care for acute, underlying, or chronic conditions. 
Thus, ATs were forced to rely on other mechanisms such as free or community-based clinics to enable athletes to be seen by physicians. While tapping into their networks and their local resources was typically successful at helping patients to obtain needed services, the ATs’ roles in care coordination were acknowledged as difficult and time-consuming. The high turnover of primary care physicians within their communities inhibited relationship building and ATs’ awareness of physicians’ accessibility to student–athletes. Likewise, ATs felt they had limited time available to build relationships with physicians or other healthcare providers outside of school. Lastly, navigating and staying up to date with healthcare resources (e.g., physicians who accept particular insurance policies, procedures for obtaining translation or transportation resources, etc.) provided further frustration and time loss. Approximately 80% of health outcomes are determined by factors other than medical care; therefore, it is important to be aware of how SDoH can positively or negatively contribute to the overall health of patients . Furthermore, this information can be used to inform the best allocation of time, resources, and education in addressing non-orthopedic and orthopedic health matters. The presence of ATs in secondary schools may be a protective factor for the populations they serve and could lead to a reduction in health disparities that are often related to SDoH, such as income and access to care. Thus, efforts should be made to ensure ATs are provided to student–athletes in low socioeconomic communities, that these ATs are trained to practice at the top of their skillset, and that they are connected to a network of other healthcare providers for practice support and integration of care. Limitations and Future Research Despite education on general medical conditions, we believe the participants from our sample did not fully understand the definition of primary care and, as a result, may have underreported and undervalued important healthcare services in their qualitative reports. Thus, the frequency of non-orthopedic conditions encountered by ATs in this study may not accurately reflect the true rate of occurrence for these conditions. Additionally, we acknowledge a missed opportunity to operationally define “primary care” and “general medical conditions” in the recruitment process and prior to the data collection. While focus groups reached data saturation after 11 participants, using a convenience sample of participants in addition to word-of-mouth advertising presents concerns that the sample included was not fully representative of the population being studied. Thus, generalizations from this sample to all Title 1 secondary school ATs, all secondary school ATs, or ATs as a whole should be applied with caution. Future research should aim to determine how ATs define primary care and general medical conditions within their clinical practice. Furthermore, future studies should collect data regarding the frequency of non-orthopedic encounters in the secondary school setting and the relationship of that frequency to ATs’ confidence in managing general medical conditions as well as serving as primary care providers. Frequency data may prove valuable in supporting the need for ATs in secondary schools.
Additionally, ATs’ confidence in evaluating and treating these conditions should be assessed in an effort to identify continuing education opportunities for ATs regarding primary care conditions or specific body systems.
A greater understanding of primary care needs for adolescents in low socioeconomic communities requires the detection of their SDoH. Athletic trainers working at Title 1 secondary schools need to be aware of the social determinants affecting their student–athletes and the ability of those social determinants to affect overall student–athlete health and well-being. Because of their unique accessibility, Title 1 secondary school ATs are called upon to evaluate and treat student–athletes with general medical conditions. Often, ATs need to refer these conditions to an appropriate healthcare provider when they are deemed outside of their scope of practice or confidence level. However, Title 1 ATs run into numerous, complex SDoH preventing efficient and effective referral to specialty healthcare providers. Thus, ATs ultimately felt their most important roles in primary care were as caregivers who mitigated avoidable barriers (e.g., insurance and transportation) by providing services in-house or within their referral network, and as care coordinators who assisted student–athletes and their families with navigating the healthcare system (e.g., insurance, translation, etc.).
|
The Succession of the Cellulolytic Microbial Community from the Soil during Oat Straw Decomposition
|
8cdbbb56-411b-406a-98e2-2ddfcfeaf2f5
|
10094526
|
Microbiology[mh]
|
In agriculture, the production of grain is accompanied by the production of straw, whose yield surpasses that of the target product [ , , ]. There are several ways of handling excessive straw quantities, differing in their economic and labor costs. One of the most cost-effective ways of utilizing excessive straw is burning, but it wastes potentially valuable resources and results in severe environmental consequences, including gas emissions and the negative impact of heat on soil fertility . Other ways of using straw include biofuel production , the investigation of which is a promising research direction. However, it requires straw transportation, which incurs extra costs. Processing straw at the site of origin can therefore be a solution to multiple problems. The reintroduction of straw into the field solves both the problem of transportation costs and that of nutrient loss. It prevents soil erosion and involves plant residues in the global carbon cycle . However, this method has some disadvantages to overcome. Straw provides some easily digestible carbohydrates, proteins, lipids, and minerals, but it mostly consists of recalcitrant lignocellulose. Additionally, the introduction of bare straw into the soil shifts the carbon-to-nitrogen ratio, which must be compensated for to allow its effective assimilation by microorganisms. Thus, the search for ways to process straw effectively remains an acute problem for agriculture. Since straw is a complex raw substrate, its decomposition requires the work of multiple enzyme systems found in a variety of bacteria and fungi. Cellulose, as the main component of straw, is decomposed by enzymes, most of which are listed in the Carbohydrate-Active EnZymes database (CAZy) . The largest class of enzymes in CAZy comprises the glycoside hydrolases (GH), which are currently divided into 173 families based on amino acid sequence similarity . The GH class encompasses enzymes that target the glycosidic bond between two carbohydrates or between a carbohydrate and a non-carbohydrate moiety . Consequently, cellulose decomposition is carried out by multiple, but not all, families within the GH class. Different families include enzymes aimed mainly at the β-1,4 links in the polysaccharide chains of recalcitrant cellulose (β-glucosidases, exo-β-glucanases, and endo-β-glucanases) and hemicellulose molecules (β-xylosidase, β-mannanase, β-mannosidase, β-xylanase, etc.), gradually breaking them into more accessible compounds . The main families containing these enzymes are GH1, GH3, GH5, GH6, GH7, GH9, GH10, GH30, GH43, and others . In natural habitats, these enzyme systems are distributed among different members of the microbial community . Understanding the principles of formation and functioning of the cellulolytic microbial consortium is essential for the formulation of highly effective preparations for straw decomposition. Soil from different environments can serve as a source of cellulolytic microorganisms. A number of studies have focused on the isolation of single strains from various soil types [ , , , , ], but this approach has several flaws. It has been reported that cellulolytic bacteria may account for up to a fifth of the total soil community . Additionally, it has been shown that many enzyme families are simultaneously involved in the decomposition of straw, and different functions are distributed between different members of the microbial community, making it impossible to isolate a single “most important” member .
So, the complex task of straw degradation is achieved by associations of microorganisms acting together. Thus, there is still an ongoing search for cellulolytic microbial consortia that would facilitate straw decomposition. Multiple studies have shown that during the composting of untreated straw with a natural epiphytic microbiome, the microbial community undergoes taxonomic and functional succession . Meanwhile, straw introduction into the soil creates a surplus of nutrients, specifically carbon compounds, which facilitates a new path in the microbiota succession . The aim of this study was to grow a de novo cellulolytic community on sterile straw, using soil as a source of degrading microorganisms, and to explore its succession stages. As a source of microbiota, we chose chernozem, a soil type common in the southern regions of Russia. Cellulolytic capabilities of the chernozem microbiome were reported earlier . Our team has already worked with chernozem and demonstrated that it can be a potential source of cellulolytic microorganisms using both traditional microbiological and molecular methods . As a source of a lignocellulosic substrate, we chose oat ( Avena ), a widely cultivated forage crop. A model laboratory experiment on the colonization of sterile straw by soil microbiota was set up in order to study the succession of the oat straw decomposition community. We analyzed microbial activity by measuring soil respiration (SR); taxonomic succession by sequencing the 16S rRNA gene for prokaryotes and the ITS2 region for fungi on the Illumina MiSeq platform; the cellulolytic potential of the resulting community by searching for GH genes in the nontargeted metagenome obtained on the Oxford Nanopore MinION platform; and functional succession using real-time PCR of a selection of GH genes.
2.1. Microbial Activity During the 6 months of the experiment, notable decomposition of straw in nylon sachets was observed. Maximum SR values were detected at the beginning of the experiment, and they declined towards the end. According to one-way ANOVA, carbon dioxide emission rates were separated into three groups with significantly different SR values (p-value ≤ 0.05), from high to low: (a) 3–21 days, (b) 28–35 days, and (c) 42–182 days ( ). The SR values in the first two groups were significantly higher than in the controls. In the last group, SR values remained higher than in the controls until day 133, although the difference was significant in only one of the measurements. According to the dynamics of carbon dioxide emission, three phases of microbial activity were distinguished: (1) early, which lasted for the first month; (2) middle, which lasted until the third month; and (3) late, which lasted until the end of the experiment. In the early phase, activity was the highest and decreased rapidly towards the end of the phase. In the middle phase, activity continued to decrease but at a slower pace. In the late phase, activity stabilized. At the end of each phase, prokaryotic and fungal quantities were assessed by quantifying ribosomal operons per 1 g of the substrate. This showed a significant increase in bacterial ribosomal operon numbers from the first to the later phases (p-values = 0.00421 and 0.000183, respectively) ( ). Fungal ribosomal operon numbers decreased between phases, but not significantly. In accordance with the results of the SR measurements, the subsequent taxonomic analysis of the dynamics of microbial colonization of straw was performed on substrates from ten sampling periods, covering the entire experiment and the different phases of microbial activity: early (days 3, 14, 28), middle (days 49, 63, 91), and late (days 119, 140, 161, 182). 2.2. Microbial Diversity In total, 41 out of 42 libraries of 16S rRNA gene amplicons were left after a quality check. Data from all libraries amounted to 624,236 reads with a median of 13,850 per library, which were attributed to 2062 phylotypes ( ). For the ITS2 fragment amplicons, all 42 libraries passed the quality check. In total, 460,040 reads were acquired, with a median of 8,278.5. The data were attributed to 3,178 phylotypes, but only 43% were assigned to a known kingdom. The decomposing community differed from the bulk soil microbiome; they had only 102 common phylotypes of bacteria (22.4% of reads) and 95 common phylotypes of fungi (42.2% of reads) ( ). Both the richness and the evenness of the straw-decomposing bacterial community, assessed by three alpha diversity indices (Observed, Shannon, and Inverted Simpson), significantly increased during the experiment ( a). The lowest values were detected on day 3, which was the earliest sampling point in the analysis; the highest values were reached on day 119, which marks the beginning of the late phase ( ). Alpha diversity indices were negatively correlated with SR values, as shown by Pearson’s product-moment coefficient (−0.6980158, p-value = 0.02479) ( c). Divided into phases, the alpha diversity indices of samples from the early and middle phases did not differ significantly from each other but were significantly lower than those from the last phase. At the same time, the measurement of MPD (mean pairwise distances) showed that the early phase was significantly less diverse than the later phases (p-value ≤ 0.001) ( ). The early phase was marked by increasing microbial diversity.
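As an aside for readers who want to reproduce this kind of summary, the following minimal Python sketch (our illustration, not the authors' pipeline; the count table and SR values are invented placeholders) shows how the alpha diversity indices named above can be computed from a phylotype count table and how one of them can be correlated with soil respiration using Pearson's product-moment coefficient.

```python
import numpy as np
from scipy.stats import pearsonr

def alpha_diversity(counts):
    """Return (observed richness, Shannon, inverse Simpson) for one sample."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    observed = counts.size
    shannon = -np.sum(p * np.log(p))
    inv_simpson = 1.0 / np.sum(p ** 2)
    return observed, shannon, inv_simpson

# toy data: rows = samples ordered by sampling day, columns = phylotypes
otu_table = np.array([[120, 30,  0,  5],
                      [ 80, 40, 10, 15],
                      [ 60, 45, 25, 30]])
sr_values = np.array([9.1, 4.3, 2.0])   # hypothetical SR values, same sample order

shannon = np.array([alpha_diversity(row)[1] for row in otu_table])
r, p_value = pearsonr(shannon, sr_values)   # product-moment correlation with SR
print(f"Shannon vs SR: r = {r:.2f}, p = {p_value:.3f}")
```

The same per-sample helper also returns observed richness and the inverse Simpson index, so any of the three indices discussed above can be substituted into the correlation.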
In the middle phase, the increase slowed down. In the late phase, diversity abruptly reached its maximum values and stayed stable until the end of the experiment. The alpha diversity of the decomposing community remained lower than that of the control chernozem soil during all phases ( ). Beta diversity of the bacterial community marked differences between the stages of straw colonization, which coincided with the alpha diversity results. According to PERMANOVA, the dispersion of samples was higher between microbial communities of different phases than within them (F = 8.2033, p-value ≤ 0.001). Bacterial samples of the decomposition experiment and the control soil were separated along the X-axis of the NMDS plot, while samples from different phases of decomposition were separated along the Y-axis ( a). The dynamics of the decomposing microbiota samples were more pronounced in the early phases than in the later ones. A stepwise comparison of beta diversity between the earliest sample and the following ones showed an acceleration of the dynamics in the early phase, then a slowdown in the middle phase, with an abrupt increase before the last phase ( ). For the eukaryotic part of the straw-decomposing community, no such tendencies as those observed for the bacterial part were revealed. The evenness and richness of the fungi, according to the alpha diversity indices, did not differ significantly between samples, and no phases could be distinguished ( b). A similar observation can be made of the beta diversity plot ( b). The NMDS plot shows shifts in diversity between samples, but they were not unidirectional, as they were for bacteria. Differences in fungal diversity between bulk soil and experimental samples were not as pronounced as for bacteria. Thus, according to the alpha and beta metrics, the straw-decomposing bacterial community accumulated diversity during the early and middle phases and reached its peak by the fourth month of the experiment, when it could be considered a mature microbial consortium. The fungal part of the community did not show clear dynamics during its succession. 2.3. Taxonomy Overview During prokaryotic succession, the number of phyla represented in the community increased. The first colonizers on the third day were attributed to only four phyla: Pseudomonadota, Bacteroidota, Bacillota, and Actinobacteriota ( a). On the 14th day, Verrucomicrobiota, Myxococcota, Planctomycetota, and Bdellovibrionota appeared. Acidobacteriota appeared on the 49th day. Chloroflexota, Cyanobacterota, Gemmatimonadota, Spirochaeota, and Thermoproteota appeared on the 91st day. After the 119th day, the maximum number of bacterial phyla was registered, including Armatimonadota, Ca. Dependentiae, Fibrobacteriota, Nitrospirota, and Patescibacteria. The most frequent genera among bacterial phylotypes were Chitinophaga, Ohtaekwangia, Bacillus, Rhizobium, Pseudomonas, and Inquilinus ( ). Consistent with the differences detected by the alpha and beta diversity analyses, the taxonomic composition of the decomposing community did not “gravitate” towards the microbiome of the control soil but rather developed in its own direction. For example, the chernozem soil was abundant in representatives of Verrucomicrobiota, Acidobacteriota, and Thermoproteota, which did not gain an advantage from growing on straw. Therefore, the subsequent analysis concentrated on the succession of the decomposing community and not on its comparison with the soil microbiome.
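To illustrate the distance-based comparisons used above, here is a small Python sketch (our own example, not the authors' workflow). It assumes a Bray-Curtis-type dissimilarity, which the text does not specify, uses non-metric MDS as a stand-in for NMDS ordination, and runs PERMANOVA on invented phase labels via the scikit-bio package; the count table is random toy data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from skbio.stats.distance import DistanceMatrix, permanova

# toy count table: 12 samples x 50 phylotypes, with hypothetical phase labels
otu_table = np.random.default_rng(0).integers(1, 200, size=(12, 50)).astype(float)
phases = ["early"] * 4 + ["middle"] * 4 + ["late"] * 4

# sample-by-sample dissimilarity matrix (Bray-Curtis assumed here)
bray = squareform(pdist(otu_table, metric="braycurtis"))

# non-metric MDS on the precomputed dissimilarities, standing in for NMDS
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(bray)

# PERMANOVA on the same dissimilarity matrix, grouped by phase
ids = [f"s{i}" for i in range(len(phases))]
result = permanova(DistanceMatrix(bray, ids), grouping=phases, permutations=999)
print(result["test statistic"], result["p-value"])
```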
Fungal diversity was represented by three phyla during the whole sampling period ( b). A major part of the fungal phylotypes belonged to Ascomycota. Apart from that, representatives of Basidiomycota and Mucoromycota were present on different sampling days. The most frequent fungal phylotypes were assigned at the genus and species level, including Chloridium aseptatum, Lecythophora canina, Schizothecium inaequale, Albifimbria verrucaria, and Conocybe crispa ( ). 2.4. Community Succession 2.4.1. Data Filtering A peculiarity of the experimental design was that we followed the dynamics of the development of the decomposing community in 10 physically separate compartments, the sachets with straw. In order to identify general patterns in microbiome development and remove random individual outliers of sachets, we kept in the analysis only phylotypes found in the decomposing microbiome with the following characteristic: the presence of at least 10 reads in more than 10% of samples. After this filtering, only 321 out of 1063 bacterial phylotypes were left, with an additional 101 “major outliers” ( ). For fungi, 68 out of 1264 phylotypes were left in the analysis ( ). Among the bacterial representatives in the individual sachets, some unique phylotypes with high read counts were allocated to the “major outliers” group. The distribution of these phylotypes between days showed that most of them stood out not only as outliers of individual sachets but also within the technical replicates of one sachet ( ). Among those were representatives of Pseudomonadota ( Pseudomonas, Sphingomonas, and Escherichia ), Bacillota ( Fructilactobacillus, Levilactobacillus, and Lactiplantibacillus ), and Verrucomicrobiota ( Terrimicrobium ) ( ). The filtered set of universally represented phylotypes was used to assess the microbial succession during the phases of straw colonization. Since the diversity of microorganisms increased over time, it was not appropriate to apply pairwise sample comparison methods or compositional data analysis methods to this dataset. Therefore, the WGCNA method, applied after a variance stabilizing transformation (DESeq2), was used to formalize the association of bacteria into groups characteristic of the different colonization phases. The analysis separated phylotypes into four clusters with distinct patterns ( a). Three groups coincided with the earlier established division of the experiment into the three phases of microbial activity: early, middle, and late. The fourth group contained phylotypes universally spread across the experiment.
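The filtering rule and the clustering idea described in this subsection can be sketched in a few lines of Python. The example below is our simplified illustration, not the authors' R-based DESeq2/WGCNA workflow: the count table is random, a log transform stands in for the variance stabilizing transformation, and ordinary hierarchical clustering of the phylotype correlation matrix stands in for WGCNA module detection.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# toy table: rows = samples, columns = phylotypes (random placeholder data)
counts = pd.DataFrame(np.random.default_rng(1).integers(0, 300, size=(40, 200)))

# filtering rule from the text: keep phylotypes with >=10 reads in >10% of samples
prevalence = (counts >= 10).mean(axis=0)
kept = counts.loc[:, prevalence > 0.10]

log_abund = np.log1p(kept)                # crude stand-in for the DESeq2 VST
corr = log_abund.corr(method="pearson")   # phylotype-by-phylotype correlation

# group phylotypes by the similarity of their abundance profiles,
# loosely mimicking the module detection step of WGCNA
cvals = corr.values
condensed = 1.0 - cvals[np.triu_indices_from(cvals, k=1)]
modules = fcluster(linkage(condensed, method="average"), t=4, criterion="maxclust")
print(pd.Series(modules).value_counts())
```

The choice of four clusters here simply mirrors the four groups reported in the text; in practice the number of modules would be derived from the data.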
2.4.2. Bacterial Phases The first, so-called “early” group comprised 71 phylotypes, appearing and reaching their maximum in the first month of incubation and disappearing almost completely at later stages. In the WGCNA analysis, it corresponds to the salmon cluster ( ). The most abundant phylotypes in this group, which were not necessarily unique in taxonomy for the whole dataset, belonged to Bacteroidota ( Chitinophaga, Dyadobacter, and Flavobacterium ) and Pseudomonadota ( Cupriavidus, Achromobacter, Rhizobium, Pseudomonas, and Lysobacter ). Some of the above and a few more phylotypes from this group were attributed to unique taxa detected only in this phase, including representatives of Actinobacteriota ( Cellulosimicrobium, Glycomyces, and Microbacterium ), Bacteroidota ( Chryseobacterium and Flavobacterium ), and Pseudomonadota ( Achromobacter, Neorhizobium, Cupriavidus, Lysobacter, Massilia, Ensifer, Microvirga, Pseudoduganella, Stenotrophomonas, and Xylophilus ). The second, “middle” phase group comprised 29 phylotypes, which reached their maximum by the second month of incubation and persisted in the community onwards. In the WGCNA analysis, these phylotypes were assigned to the green cluster ( ). The most prominent representatives belonged to Bacteroidota ( Chitinophaga, Ohtaekwangia ), Bacillota ( Bacillus, Solibacillus, Planococcaceae, and Terribacillus ), Pseudomonadota ( Inquilinus, Rhizobium, Bradyrhizobium, Luteibacter, Starkeya, and Luteimonas ), and Planctomycetota ( Singulisphaera ). The third and most diverse group comprised 139 phylotypes, appearing in the late phase, after three months of incubation. These were represented by the red cluster ( ). In this cluster, the major representatives belonged to Bacteroidota ( Ohtaekwangia and Microscillaceae). Numerous representatives of Acidobacteriota, Actinobacteriota ( Conexibacter, Galbitalea, Dactylosporangium, Iamia, and Solirubrobacter ), Verrucomicrobiota, Myxococcota, Cyanobacterota, Chloroflexota, Bdellovibrionota, Spirochaeota, Planctomycetota, Thermoproteota, Gemmatimonadota, and others appeared at this stage. The last group, corresponding to the cyan cluster, contained 82 phylotypes that appeared in all samples, either consistently or without apparent patterns ( ). Here, most of the universally abundant phylotypes were attributed to Paenibacillus, Starkeya, Pseudoflavitalea, Niastella, and Lysinibacillus. The sporadic appearance of phylotypes from Bacteroidota ( Ohtaekwangia, Chitinophaga ), Actinobacteriota ( Conexibacter and Actinocorallia ), Verrucomicrobiota ( Terrimicrobium ), and others was also noted. To conclude, representatives of Bacteroidota ( Chitinophaga, Ohtaekwangia ) were persistent in all phases of the bacterial succession, but each phase had its own phylotypes attributed to these genera. The early phase was characterized by Gammaproteobacteria representatives, which later disappeared from the community. The middle phase was distinguished by a wide variety of Bacillota and Alphaproteobacteria that appeared and persisted in the community. The last phase marked a burst of bacterial diversity from different phyla. 2.4.3. Fungal Phases The WGCNA analysis separated the fungal phylotypes into two clusters, one dispersed across the whole succession (salmon) and one corresponding to the middle-to-late phase (green) ( b). Consistent with the alpha and beta diversity analyses, many fungal phylotypes were detected at all phases of the experiment, with only some species showing differences according to the day of sampling ( ). Coprinellus flocculosus and Schizothecium inaequale appeared in the fungal community from the early phase onwards, while Chloridium aseptatum, Lecythophora canina, Marquandomyces marquandii, and Scytalidium appeared mainly after the second month of the experiment. Phylotypes belonging to Ascomycota ( Albifimbria, Coniochaetaceae, Gibberella humicola ), Basidiomycota ( Conocybe, Occultifur, Waitea ), and Mucoromycota ( Actinomucor ) were encountered periodically in the dataset. 2.5. Functional Distribution of Glycoside Hydrolases in the Mature Decomposing Community The transition between the middle and late phases of the succession of the decomposing community marked the point of maximum microbial diversity. Additionally, the SR data showed that after 3 months of the experiment, microbial activity had stabilized. Taking these considerations into account, the 3-month sample, at the borderline between the middle and late phases of the straw-decomposing microbial community succession, was chosen for the functional analysis and the search for GH genes.
The resulting yield of the full metagenome sequencing of DNA from the 91-day sample representing this phase was 10.9 Mbp, with an N50 of 4886. The metagenome was polished and annotated, and only genes annotated as belonging to the CAZy database were investigated further. The metagenome contained 83.9% bacterial contigs, and only 1.8% belonged to fungi. The rest were attributed to Metazoa, Plants, and Archaea. According to the CAZy database, the metagenome of the decomposing microbial community contained 1388 GH genes, 1194 of which belonged to Bacteria and 193 to Fungi ( ). As assigned by eggNOG, the most abundant CAZy genes were attributed to the Pseudomonadota (Xanthomonadales, Sphingomonadales, Bradyrhizobiaceae, Rhizobiaceae) (455 genes), Bacteroidota (Sphingobacteriales, Cytophagales) (339 genes), Actinobacteriota (Streptosporangiales) (156 genes), and Bacillota (60 genes) phyla for bacteria and the Ascomycota (Sordariomycetes) (191 genes) phylum for fungi. Thus, all four of the major phyla in the bacterial part of the decomposing microbial community detected by Illumina sequencing were also represented by the highest quantities of GH genes. However, according to the 16S rRNA gene sequencing data, the relative abundance of Bacillota was higher than that of Actinobacteriota on all analyzed days of the experiment, while the relative content of GH genes attributed to these phyla was reversed. According to the CAZy classification, the most represented GH families in the metagenome of the three-month-old straw-decomposing community were GH3 (227), GH31 (117), GH18 (114), and GH20 (91). According to the main functions of the GH families present, three major groups were distinguished in the metagenome: those connected to cellulose degradation (the “cellulose” group), those connected to the metabolism of simple carbohydrates (the “carbohydrate” group), and those connected to chitin degradation (the “chitin” group) ( ). The main representatives of the “cellulose” group in this dataset belonged to the GH3, GH5, GH9, GH30, GH43, and GH94 families. Families from the “carbohydrate” group included GH31, GH95, GH15, and GH77. A notable presence was detected for families from the “chitin” group, including GH18, GH19, and GH20. All these GH families from all three groups were found in almost all phyla detected by 16S rRNA and ITS2 amplicon sequencing, and their relative abundance coincided with the taxonomy data. Pseudomonadota had all groups present, and the “cellulose” group was the most abundant, followed by the “carbohydrate” and then the “chitin” group. For the Bacteroidota, Actinobacteriota, Bacillota, Acidobacteriota, and Planctomycetota phyla, the “cellulose” and “carbohydrate” groups were equally represented, while the “chitin” group was less present than the other two. The “cellulose” and “carbohydrate” groups were also detected in minor quantities in Verrucomicrobiota, Cyanobacterota, and Chloroflexota. As for the fungal part of the decomposing community, in Ascomycota the “chitin” group of GHs had more matches than the “cellulose” and “carbohydrate” groups. For Basidiomycota, only one gene was found, attributed to the “chitin” group. To conclude, according to the search for GH genes in the mature straw-decomposing microbial consortium, it was functionally represented by GHs involved in the utilization of cellulose, simple carbohydrates, and chitin. The main carriers of these genes coincided with the bacterial and fungal phyla appearing in the community from the first days of straw colonization.
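For readers who want to see how such a tally might be computed, here is a minimal Python sketch (our illustration): the family-to-group mapping simply restates the families listed above, and the annotation records are invented placeholders standing in for a parsed per-gene annotation table.

```python
from collections import Counter

# GH family -> functional group, taken from the families listed in the text
GROUPS = {
    "cellulose":    {"GH3", "GH5", "GH9", "GH30", "GH43", "GH94"},
    "carbohydrate": {"GH31", "GH95", "GH15", "GH77"},
    "chitin":       {"GH18", "GH19", "GH20"},
}

def group_of(family):
    for name, members in GROUPS.items():
        if family in members:
            return name
    return "other"

# (phylum, GH family) pairs as they might come from an annotation table
annotations = [("Pseudomonadota", "GH3"), ("Bacteroidota", "GH18"),
               ("Actinobacteriota", "GH5"), ("Ascomycota", "GH20"),
               ("Pseudomonadota", "GH31")]

tally = Counter((phylum, group_of(family)) for phylum, family in annotations)
for (phylum, group), n in sorted(tally.items()):
    print(f"{phylum:18s} {group:12s} {n}")
```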
2.6. Succession of GH Genes during Phases of Decomposition To assess the functional dynamics of the degradation phases, a set of 23 GH genes found in the metagenome and connected to cellulose decomposition was chosen for primer construction ( ) and real-time PCR analysis. They represented various GH families and were attributed to several genera found in the microbial community by the earlier analyses. The data were log-transformed, and the difference in phase distribution was calculated relative to day 3 of the experiment. As a result, most of the tested GH genes showed their maximum presence at the middle phase of cellulose colonization, regardless of their function ( ). The presence of several GH genes did not change between phases. According to PERMANOVA, differences in the dynamics of the selected GH genes were significantly explained by taxon attribution (R² = 0.54896, p-value = 0.007) and not by GH family attribution (R² = 0.18580, p-value = 0.499) ( ). This effect is illustrated by the WPGMA clustering of the real-time PCR data ( ): GH genes are grouped according to the genus and not the GH family. So, in the long-term succession of the microbial community, the presence of the GH genes was determined not by the stage of cellulolytic substrate decomposition but by the microbiota inhabiting the community at a certain point in time.
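A compact sketch of the kind of processing described in this section is given below (our example, not the authors' code: the gene labels and copy numbers are invented, and the Euclidean distance on the log profiles is our choice). Quantities are log-transformed, expressed relative to day 3, and the resulting gene profiles are clustered with WPGMA, which corresponds to scipy's 'weighted' linkage.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

genes = ["GH3_Chitinophaga", "GH5_Bacillus", "GH9_Cellulosimicrobium"]  # hypothetical labels
days = [3, 49, 91, 140]                                                 # sampling days for the columns
# hypothetical copy numbers per gram of substrate (rows = genes, columns = days)
copies = np.array([[1e4, 8e5, 3e5, 9e4],
                   [5e3, 6e5, 7e5, 2e5],
                   [2e5, 4e4, 1e4, 8e3]], dtype=float)

# log-transform and express each gene relative to its day-3 value
log_rel = np.log10(copies) - np.log10(copies[:, [0]])

# WPGMA clustering of the gene profiles ('weighted' linkage in scipy)
Z = linkage(pdist(log_rel, metric="euclidean"), method="weighted")
tree = dendrogram(Z, labels=genes, no_plot=True)   # inspect or plot as needed
print(np.round(log_rel, 2))
```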
Soil is a complex substrate containing nutrients in a variety of forms, from easily digestible to recalcitrant. Moreover, this environment is under constant biotic and abiotic stress. All this shapes a complex soil microbiota consisting of a plethora of microorganisms adapted to various nutritional and climatic conditions. Earlier studies already used soil as a source of active microbiota in experiments on the decomposition of various substrates , but, as a rule, they did not remove the surface microbiome from the substrate, which distorted the results of the microbial succession. The design of our experiment allowed us to exclude this effect, analyze the process of the de novo colonization of the lignocellulosic substrate by the chernozem microbiota, and identify its most prominent phases during the long-term experiment. The chernozem from Kamennaya steppe, which was used in the current experiment, was recognized as having a potentially high biological activity and diversity of microbial communities . We worked with this soil earlier and showed that it contains potential cellulolytic microorganisms, but that plating the soil on a cellulose-containing medium drastically shifts the composition of the initial microbial community, giving an ecological advantage to bacteria that had not previously been predominant . In line with these results, the composition of the mature decomposing community differed strongly from the priming soil microbiome, which could be explained by the fact that soil and straw are environmental niches that provide benefits to different groups of microorganisms. The diversity of the cellulolytic community remained lower than that of the primary soil, even after 6 months of incubation. The microbiome of the cellulolytic community bore a resemblance to the soil microbiome, e.g., representatives from Bacillota ( Bacillus, Planococcaceae ) and Gammaproteobacteria ( Pseudomonas, Massilia ), but in most cases these were not the main components. Due to the design of our experiment, the only measured agrophysical parameter was soil respiration (SR), which is defined as the release of carbon dioxide by microorganisms. The application of this method has shown its effectiveness in assessing microbial activity in response to anthropogenic agricultural practices . Maximum SR values were detected at the first measurement on the third day of the experiment, after which a significant decline in SR values was observed, particularly after the second month. Previously, the effect of elevated SR values during cellulose decomposition was associated with the introduction of additional glucose to the substrate . Thus, our results could be explained by the particular lignocellulosic substrate we used: oat straw has a high content of water-soluble carbon, which is more accessible to microorganisms than cellulose . This might have led to higher microbial activity in the early phase, connected with the utilization of the simple carbohydrates present in the unaltered straw. Depending on the design and the duration of the experiment on straw decomposition, two or three phases could be distinguished in the process of microbial succession [ , , ]. Our data allowed us to distinguish three phases of bacterial succession during the decomposition of the lignocellulosic substrate: early (first month), middle (second to third month), and late (fourth to sixth month).
This distinction was supported by the microbial activity assessed by carbon dioxide emission, by the bacterial quantities assessed by real-time PCR, and by the bacterial dynamics assessed by high-throughput 16S rRNA gene sequencing. Despite the fact that the experiment was set up in multiple separate nylon bags, the pattern of microbial succession turned out to be shared among them, with the exception of several outlier phylotypes. Each phase was characterized by a group of microorganisms consisting of several dozen co-varying bacterial phylotypes. These phylotypes included both taxa unique to each phase and taxa common throughout the experiment. These findings coincide with the functional differences in the cellulolytic community between phases: the difference in the patterns of GH gene presence was connected to the bacterial host and not to the family of the enzyme. Despite the evidence that the early phase of community formation involved the degradation of simple carbohydrates, early microbial colonizers of straw were potentially cellulose-degrading organisms. Among them were representatives of actinomycetes, which are known to be active producers of secondary metabolites . For instance, Cellulosimicrobium was reported to be a normal part of the soil microbiota and to have cellulase and xylanase activities [ , , , ]. Some strains of Microbacterium were reported to have cellulolytic activities . However, actinomycetes reached their maximum diversity by the late phase. Some minor representatives from different phyla of the early succession phase, including Streptomyces, Chryseobacterium, and Dyadobacter, were reported to be able to degrade lignocellulose. The early stages were also characterized by a high relative representation of Gammaproteobacteria (Pseudomonas, Cupriavidus, Massilia) and Alphaproteobacteria (Rhizobiaceae); most of them were reported to contain many cellulase-active GHs. Earlier findings established that Pseudomonadota, specifically Alpha- and Gammaproteobacteria, play a major role in cellulose decomposition . In accordance with these data, in this study about half of the GH genes found in the metagenome of the community and involved in cellulose decomposition belonged to representatives of Pseudomonadota. Bacillota were present in all phases, but they populated the microbial community most prominently in the middle phase. This is consistent with the findings that Bacillota appear after the initial stage of lignocellulose decomposition . Many genera of this phylum detected in this dataset were reported to have cellulolytic strains, including Bacillus, Paenibacillus, and Lysinibacillus . A relatively low content of GH genes was found in representatives of this phylum, but this could be explained by differences in the annotation databases for 16S rRNA and metagenome data and by the low coverage of the metagenome assembly. The most prominent role in the straw decomposition community in this experiment was played by Bacteroidota. A wide range of microorganisms from this phylum is known to play an important role in the decomposition of various polymers . In our work, it was shown that these microorganisms are present at every succession stage, with some representatives of this phylum (Chitinophaga) being succeeded by others (Ohtaekwangia). Moreover, this phylum accounted for the second-largest share of the GH genes found in the metagenome.
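The co-varying phylotype groups mentioned here were obtained with WGCNA applied to variance-stabilized counts (see the Methods); purely as a simplified stand-in for that idea, the sketch below groups phylotypes by correlation of their abundance profiles across sampling points, using invented numbers and plain hierarchical clustering rather than the actual WGCNA algorithm.

```python
# Simplified stand-in for module detection (NOT the WGCNA algorithm itself):
# group phylotypes whose abundance profiles correlate across sampling points.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = phylotypes (hypothetical), columns = sampling points; values mimic
# variance-stabilized abundances (placeholders, not data from this study).
phylotypes = ["Chitinophaga_1", "Pseudomonas_1", "Bacillus_1", "Ohtaekwangia_1"]
profiles = np.array([
    [5.0, 7.5, 3.0, 1.5],
    [6.0, 7.0, 2.5, 1.0],   # co-varies with the first profile ("early/middle" group)
    [1.0, 3.0, 6.5, 7.0],
    [0.5, 2.5, 6.0, 7.5],   # co-varies with the third profile ("late" group)
])

# Distance = 1 - Pearson correlation between phylotype profiles.
corr = np.corrcoef(profiles)
dist = np.clip(1.0 - corr, 0.0, None)
condensed = dist[np.triu_indices_from(dist, k=1)]   # SciPy's condensed form

modules = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
for name, module in zip(phylotypes, modules):
    print(f"{name}: module {module}")
```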
It is worth noting that, according to the Polysaccharide Utilization Loci (PUL) DataBase, the major representative of the early community, Chitinophaga, is rich in PULs, which is a marker of active cleavage of complex polysaccharide substrates already at the early stages . Although a significant proportion of microorganisms not associated with lignocellulose decomposition appear in the later stages of decomposition, we can assume they are an important part of the stable cellulolytic community. For example, it is known that enzymes associated with sulfur metabolism may play an important role in the decomposition of complex straw components, such as polyphenol compounds . The presence of specific nitrifiers and methylotrophs in the community (Nitrocosmicus, Nitrospira) can play an important role in the construction of efficient communities. The role of nitrogen exchange in catalytic soil systems is underestimated because, in addition to the competition for carbon sources, the high competition for free nitrogen should also be considered . Starkeya, one of the major inhabitants of the middle phase, was described as having a chemolithoautotrophic lifestyle, which allows it both to consume and to produce carbon dioxide . Conexibacter, which appears in the late phase, was isolated as a soil bacterium involved in the carbon and nitrogen cycles . The appearance of predatory microorganisms (obligate—Bdellovibrionota, Vampiriovibrionota; facultative—Myxococcota, Cytophaga, Lysobacter) at different phases also indicates the reorientation of the community away from simple carbohydrate catabolism, since predation is known to be a powerful factor in the dynamics of microbial succession . There is also evidence that some of the genera detected at various phases of decomposition (Pseudomonas, Planctomyces, Vampiriovibrio, Luteibacter) can be accompanying microflora that act as secondary consumers . These examples expand the understanding of the complexity of interactions between community members. We showed an increase in bacterial and phylogenetic diversity and a succession from a relatively simple cellulolytic community to a complex microbial community of autochthonous microorganisms with variable functions. At the same time, we did not observe an increase in fungal diversity. This may be linked to the difference in the life cycle duration of these groups of microorganisms. Full metagenome sequencing revealed that fungi accounted for less than 2% of the contigs. This contradicts the real-time PCR data, which showed high quantities of fungal ribosomal operons at the end of the middle phase. The discrepancy could be due to a number of factors: less efficient fungal DNA isolation, less efficient nanopore sequencing of fungal DNA, or lower-quality assemblies due to the large size and diversity of fungal genomes. In contrast to bacterial succession, only two phases were identified in fungal succession, but many major phylotypes were present in all phases. Many fungi found in the community were described as saprophytic with various enzymatic activities. The early phase was specifically characterized by Schizothecium inaequale, which is described as both a coprophilous and an endophytic fungus . Another endophyte associated with decaying matter was Coprinellus flocculosus, which is a mushroom-forming fungus . Species associated with the late phase of straw decomposition, Chloridium aseptatum and Scytalidium, were reported to be endophytic with high enzymatic activities .
Other saprophytic fungi found in the community, usually associated with soil or plants and reported to have high enzymatic activity, were Albifimbria verrucaria, Chaetomiaceae, Occultifur, and Waitea circinata . Coniochaetaceae, which is widely represented in the later phase, is a family of well-known lignocellulolytic fungi . The fungal consortium in the early phase also included the known food mold Actinomucor elegans, which was reported to have high enzyme activities, including protease, lipase, and glutaminase activities, among others . The major fungal phylotypes described above, with the exception of Coniochaetaceae, are not described in the literature as typical cellulolytic organisms, so the role of the fungal fraction in the current experiment remains unclear. Nevertheless, both bacteria and fungi are important players in lignocellulose decomposition [ , , , ]. It is known that, although the fraction of nucleic acids encoding CAZymes is larger for the bacterial than for the fungal component of the community, functionally it is fungal enzymes that can play the main role in the degradation of the lignocellulosic complex [ , , ]. It cannot be excluded that the decrease in diversity and the shift of the fungal community from Mucoromycota and Basidiomycota at early stages to Ascomycota at later stages was the result of the antifungal activity of the microbial community. For instance, one of the main components of the core microbial community was Chitinophaga, which specializes in mycelium degradation . Mucilaginibacter, found in the late phase, can potentially be a mycophagous bacterium . This assumption is also supported by the high number of chitinases we found in the bacterial part of the microbial community. The presence of both bacterial and fungal chitinases in the metagenome of the mature decomposing consortium is indicative of the potential counteraction of these two community components.
4.1. Experiment Design The idea of the experiment was to model the dynamics of the formation of cellulolytic consortium from soil microbiota using straw as a substrate and study its colonization process. To achieve this sterilized straw in nylon sachets was submerged in the soil for six months. Fallow chernozem from the Agroecological Station “Kamennaya Steppe” of the Dokuchaev Research Agricultural Institute in the Voronezskaya area was chosen as a source of decomposing microbiota. This soil was removed from crop rotation more than 100 years ago. Before that it was used for sowing wheat. Its characteristics were: C total 4.86 ± 0.12%; pH salt 6.40 ± 0.08; N total 0.533 ± 0.02. As a source of lignocellulose biomass oat ( Avena ) straw was used with the following characteristics: ash 9.98 ± 2.04, N total 1.897 ± 0.012, C:N 23.5, water-soluble carbon 11.8 ± 0.50 g/kg. The experiment took place in 2018. The soil was ground and sieved at 5 mm, watered to 60% of the full moisture capacity, placed in the 2-liter plastic containers, and left to rest for 2 weeks to eliminate the effect of these manipulations on the CO 2 emission. Straw was shredded into 0–2 mm particles, 1 g portions were placed in the small nylon sachets and were subjected to E-beam sterilization. Wetted sachets (10 per container) were placed vertically in rows at a depth of 0.5–4 cm inside seven replicate containers with pre-prepared soil. Additionally, five replicate control containers with soil and without straw were laid at the same time. More detailed information about the experiment layout was described earlier . The humidity of the substrates was kept constant at 60% and the temperature was maintained at 28 ± 1 °C for the duration of the experiment. 4.2. Microbial Activity Test by the SR Measurement To assess microbial activity linked to straw decomposition during the experiment, soil respiration (SR) in the experimental and control containers was measured weekly for 6 months using the conventional alkali absorption method . SR data was processed in Statistica 13, using one-way ANOVA with post hoc Tukey HSD test (TIBCO Software Inc., Palo Alto, CA, USA). 4.3. Sample Collecting and Amplicon Sequencing Coinciding with the SR measurements, sachets with straw were pulled one by one out of five experimental containers once every 1–2 weeks for the first 2 months and after that once every 3–4 weeks. Two experimental containers with straw sachets remained intact for all 6 months for the measurement of unaltered SR. The content of pulled-out sachets and the sample of control chernozem soil were stored in plastic tubes at −20 °C for the subsequent molecular analysis. Three to five replications for each time of sampling (thirty-six samples in total) and six replications for the soil sample were used for the DNA extraction with NucleoSpin ® Soil Kit (Macherey-Nagel GmbH & Co. KG, Düren, Germany) as described previously (2019). For the analysis of taxonomic dynamics of straw colonization, libraries of partial 16S rRNA gene (for bacteria and archaea) and of ITS2 (for fungi) were prepared and sequenced on the Illumina Miseq platform (Illumina, Inc., San Diego, CA, USA) as described previously . 4.4. Amplicon Data Analysis Data from the sequenced amplicon libraries were processed using the DADA2 pipeline in the R software environment v. 4.2 . Taxonomic identification was carried out using the Silva 138.1 database for 16S rRNA gene sequences and the Unite database for ITS2 sequences. 
The phylogenetic tree was constructed using SEPP for the 16S rRNA data and the IQ-TREE 2.1.2 program for ITS2. Further processing was carried out using the phyloseq and ampvis2 packages. Alpha diversity was assessed by the observed richness, Shannon, and inverse Simpson measures and by MPD from picante, with the significance of mean differences between groups calculated using ANOVA with Tukey's HSD test . Beta diversity was assessed by NMDS with the Bray–Curtis distance matrix . The significance of differences between microbial communities was estimated by PERMANOVA using the adonis2 function in vegan . The WGCNA method, applied after variance-stabilizing transformation from DESeq2, was used to divide the microbial association into groups characteristic of different colonization phases. 4.5. Full Metagenome Sequencing and GH Gene Analysis To assess the composition of GH genes in the cellulolytic community, the DNA isolated from the 3-month composting straw sample was used for full metagenome sequencing on the MinION platform (Oxford Nanopore Technologies, Oxford, UK) as described previously . The resulting raw reads were base-called using guppy v. 6.0.6 with a high-accuracy model and adapter clipping, and were additionally checked with porechop v. 0.2.4 for residual adapter sequences, which were removed. Flye v. 2.9 with the --meta flag was used to assemble the metagenome from the reads. The assembly was polished using a single run of medaka v. 1.5.0-rc.2, and the polished assembly was used for subsequent steps. The assembly was annotated using eggNOG-mapper v. 2.1.9 with -m diamond and --dmnd_frameshift . The search for GH genes was conducted using hmm profiles from the PHAM database . The attribution of GH genes to different functional groups was performed using the CAZy database . 4.6. Real-Time PCR Analysis To evaluate the dynamics of microbial content in the decomposing substrate, real-time PCR of a 16S rRNA gene fragment for bacteria and an ITS2 fragment for fungi was conducted in triplicate for samples from days 28, 91 and 161 on the CFX96 Real-Time PCR Detection System (Bio-Rad, Germany) as described previously . The threshold cycle (CT) data were converted to the number of ribosomal operons per 1 g of substrate. The significance of mean differences between the days of measurement was calculated using ANOVA with Tukey's HSD test . To assess the dynamics of GH genes over the course of the experiment, we constructed primers for a representative set of bacterial GH genes found in the metagenome and belonging to the genera detected by 16S rRNA gene sequencing (23 primer pairs in total ( )). The real-time PCR was performed with these primers in triplicate for samples from days 3, 28, 91 and 161. The 16S rRNA gene was used as an internal control. The real-time data were processed using the comparative CT method (the 2^-ΔΔCT method) and clustered using WPGMA based on Euclidean distance in R. ANOVA with Tukey's HSD test was applied to the log-transformed values. The code is available at https://crabron.github.io/manuals/straw_wgcna.html , accessed on 9 December 2022.
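As a worked illustration of the comparative CT calculation named above, the sketch below applies the standard 2^-ΔΔCT formula, normalizing a target GH gene to the 16S rRNA internal control and referencing it to the day-3 calibrator; the CT values are invented, and the actual analysis was performed with the R code linked above.

```python
# Worked example of the comparative CT (2^-ΔΔCT) method with invented CT values.
# The target GH gene is normalized to the 16S rRNA internal control, and day 3
# serves as the calibrator, as in the analysis described above.

def fold_change(ct_target: float, ct_reference: float,
                ct_target_cal: float, ct_reference_cal: float) -> float:
    delta_ct_sample = ct_target - ct_reference                 # sample of interest
    delta_ct_calibrator = ct_target_cal - ct_reference_cal     # day-3 calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2.0 ** (-delta_delta_ct)

# Hypothetical CT values for one GH gene on day 91 versus the day-3 calibrator.
fc = fold_change(ct_target=24.8, ct_reference=16.2,            # day 91
                 ct_target_cal=28.1, ct_reference_cal=16.5)    # day 3
print(f"Relative abundance on day 91 vs day 3: {fc:.1f}-fold")
```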
The novelty of this work lies in the design of the experiment, which demonstrated the dynamics of de novo microbial colonization of a straw substrate by soil microbiota over a 6-month period. Chernozem soil acted as the primary source of cellulolytic microorganisms, whose abundance shifted strongly during straw decomposition. The process of bacterial succession was accompanied by a decrease in microbial activity but an increase in bacterial diversity during the experiment. However, no increase in diversity was shown for the fungal community. Bacterial succession was divided into three phases, each characterized by a group of co-varying taxa. Genes from various GH families were detected in the community from the first phase onward, with the largest increase in the middle phase. The changes in the representation of the selected GH genes between phases were explained by their taxonomic rather than their functional attribution. The early phase was characterized by the appearance of representatives of Bacteroidota and Alpha- and Gammaproteobacteria, which were shown to be potential active decomposers of the lignocellulose substrate but which disappeared by the end of the phase. The middle phase can be considered the core of the emerging cellulolytic community, most members of which contain GH genes connected with cellulose decomposition according to the metagenome sequencing, including bacteria (Chitinophaga, Bacillus, Ohtaekwangia, Rhizobiaceae) and fungi (Chloridium and Coniochaetaceae). The last phase marked the functional diversification of the community, when predatory microorganisms appeared, along with bacteria involved in the cycling of non-carbon substrates released as a result of the activity of other microorganisms. All this suggests that cellulolytic communities should not be considered only as a source of GH-rich microorganisms; a comprehensive approach is required to construct stable and effective decomposing communities.
An Overview on the Use of miRNAs as Possible Forensic Biomarkers for the Diagnosis of Traumatic Brain Injury
Due to the comprehensive information they can provide, molecular investigations are raising an increasing interest for their possible application to the forensic field. In this regard, several branches, such as forensic genomics, transcriptomics, and proteomics, have frequently proved useful as complementary approaches to routine procedures by helping define the genetic and biochemical bases underlying the cause of death . The ability to regulate physiological and pathological processes, together with the evidence of tissue-specific expressions, has opened up a wide range of possible applications for microRNA profiling in forensic diagnostics over the last decade. In RNA profiling, microRNAs (miRNAs) have been investigated lately as possible forensic biomarkers, mainly due to their small size, making them available to be recovered even in highly degraded samples . miRNAs are single-stranded, 18- to 24-nucleotide-long fragments whose contribution to the regulation of several biological processes is explicated at the post-transcriptional level by complementary binding sequences of mRNA so that gene expression can be silenced through either mRNA degradation or impaired protein synthesis . Indeed, since the first suggestion of their application for body fluid identification , the potential of miRNA profiling has been applied to other forensic issues, including post-mortem interval (PMI) estimation, the vitality of wounds, brain injury related to aging, drug abuse, and stroke, and drowning and sepsis diagnosis of death . To investigate the role of miRNAs in the pathophysiological events underlying traumatic brain injury (TBI), due to the similarities between human and animal function mechanisms, several researchers have engaged in the evaluation of the changes in their expression profile in experimental rodent models, by inducing TBI with either weight drop, fluid percussion, or controlled cortical impact . The results obtained on these pre-clinical models led to identifying groups of miRNAs that could be identified as TBI biomarkers per se, but also showed time- and severity-dependent expression changes (miR-132, miR-21 and miR-30a, miR-let-7i are among the most evaluated) . One of the most studied miRNAs in pre-clinical models is miR-21, whose up-regulation following TBI has been found to lead to the inhibition of apoptosis through targeting Bcl2, PTEN, and CDP4, as well as to the promotion of the outgrowth of neuronal axons by activating Ang-1/Tie-2 and Akt signaling, thus representing a potential mechanism by which the brain attempts to limit trauma-related neuronal destruction . The importance of such findings lies in translating them into human models. A few works have been recently produced in which miRNA expression profiles have been evaluated within a clinical context, this leading not only to the validation of specific miRNAs as TBI diagnostic markers but also to the correlation of their differential expression to severity and prognosis; in addition, the comprehension of the mechanisms by which they act within specific cellular pathways has also laid the basis for their possible use for therapeutic purposes. Significant evidence concerning the role of miRNAs as TBI markers in a forensic context is still lacking; in this light, the present narrative review aimed to explore the primary miRNAs involved in the mechanisms underlying TBI that could be considered for future evaluation as possible markers in a post mortem setting. 
With this purpose in mind, although enlightening, experimental investigations on animal models have been excluded; since the field of application we want to explore is the possible use of miRNAs in a forensic setting, we preferred to focus on works carried out on human samples in a clinical context, which might be proposed as an equivalent comparative system for future post mortem investigations. This research, performed using the PubMed and Scopus databases, was carried out according to the following inclusion criteria: English language; evaluation of miRNA expression on human samples; miRNA detection carried out on serum and plasma samples; and miRNAs that show an increase in cases of TBI. The articles selected included reviews, original articles, and prospective studies; references from the chosen articles were also evaluated for possible inclusion. Exclusion criteria were languages other than English and unavailability. Traumatic brain injury (TBI) is an impairment of cerebral functions due to direct or indirect mechanical insults, such as blasts, assaults, collisions, penetrative injuries, etc. It can be classified as a primary or secondary injury; the first one consists of the acute pathological changes induced by an external force at the (time) of the impact, which are mainly represented by internal hemorrhages, brain contusion, and axonal and vascular damage; the second one comprises all those pathological processes leading to further impairment of the neurological functions (oxidative stress, glutamate excitotoxicity, Ca 2+ overload, inflammatory response). With an annual incidence of around 50 million individuals, TBI represents one of the leading causes of disability worldwide since the related neurodegeneration increases the risk of developing chronic behavioral, cognitive, and physical impairment, as well as dementia, Alzheimer’s disease, and Parkinson’s disease . TBI severity can be classified as mild, moderate, or severe according to the Glasgow Coma Scale, a scoring system that consists of the evaluation of eye, motor, and verbal responses; a score of 13–15 corresponds to a mild TBI (mTBI); a 9–12 score corresponds to a moderate TBI; and a ≤8 score identifies a severe TBI. Other parameters contributing to better defining the severity and prognosis of TBI include clinical outcome, loss/alteration of consciousness, impaired mental state, and brain alterations detected by imaging . From a diagnostic point of view, imaging represents the gold standard. Computed tomography (CT) allows the identification of the injured cerebral areas and the extent of injury, thus guiding the most appropriate clinical or surgical management. In selected cases, MRI can also provide important information due to its better tissue contrast and increased sensitivity compared to CT. Nonetheless, mTBI is frequently associated with a lack of visible signs of head impact (bleeding, lacerations) on neuroimaging, thus making it difficult to either achieve a correct diagnosis or make a prognostic evaluation . Therefore, to overcome such limits, due to the knowledge of the pathophysiological mechanisms underlying TBI (release of cytokines, chemical mediators, and neurotransmitters; NO-dependent and Ca 2+ -dependent induction of apoptosis), several researchers have engaged over time in the identification of fluid biomarkers of injured axons, neuronal and glial cells, as well as inflammation biomarkers . 
Since almost 70% of miRNAs are reported to be expressed in the central nervous system—where they guide all stages of neurodevelopment and function—those involved in the molecular pathways activated by TBI have been gaining attention over the last decade as possible fluid biomarkers . The complex TBI-related molecular network in which soluble miRNAs are involved was carefully elucidated in a meta-analysis by Cente et al. , who made use of bioinformatic systems to identify the target genes of deregulated miRNAs isolated from plasma, serum, and cerebrospinal fluid, as well as the related signaling pathways, in order to link severe TBI to the pathogenesis of neurodegeneration. As a result, they found that the miRNA-targeted genes following severe TBI were involved in a great variety of processes, such as brain development, neurogenesis, myelinization, oligodendrocyte differentiation, regulation of synaptic plasticity, axon guidance, and regulation of inflammatory genes. As for the molecular pathways activated, one of the most significant was the activation of BDNF/TrkB signaling downstream of the PI3K/AKT/MAPK pathway, with a neuroprotective role. Another dominant pathway, shared among all examined biofluids, was the one involving WNT/β-catenin and Notch signaling, with a neurodegenerative and reparative role. As expected, inflammatory pathways also figured among the dominant ones, with a significant involvement of IL-2, TGF-β, TLR, and integrin signaling, which were reported to play neuroprotective roles, regulate the pro-inflammatory activity of microglia and astrocytes, and suppress neuro-inflammation and blood–brain barrier disruption following TBI. Cente et al. also found that most of the identified pathways were shared among the tested biofluids. Such evidence suggested the activation of a complex interaction between the brain, periphery, and immune system, thus confirming the role of miRNAs in the neuropathophysiology of TBI and their possible value as diagnostic and prognostic biomarkers . Extracellular vesicles (EVs) are subcellular particles playing an important role in intercellular communication. Their structure consists of a lipid bilayer membrane whose cargo—originating from the parental cell—is variably represented by lipids, proteins, DNA, mRNAs, miRNAs, non-coding RNAs, and organelles. Based on origin and dimensions, EVs can be classified into three main subtypes: exosomes (Exos, 10–100 nm diameter), which originate from the endosomal/multivesicular body (MVB) system and are stored inside the cell before their release; microvesicles (MVs, 100–1000 nm), which originate from a budding process of the plasma membrane; and apoptotic bodies (1–5 µm), which are generated from apoptotic cells and contain degradation products. All brain cells produce EVs, including neurons, astrocytes, microglia, and oligodendrocytes, but their content is highly variable depending on external signals. Within a neurological context, they play a key role in modulating synaptic activity and neuronal communication, thus contributing to the pathogenesis of several neurodegenerative processes, including those underlying TBI. Among EVs, exosomes are of particular interest due to the ability of their miRNA cargo to modulate the gene expression pattern in recipient cells and induce systemic inflammation . Such a role is highlighted in the work of Long et al.
, who carried out an experimental study in which TBI brain extracts were used to stimulate the production of exosomes in primary cultured astrocytes; once detected by immunofluorescence, exosomes were separated from astrocytes and added to primary cultured microglia. Subsequent immunofluorescence, qRT-PCR, and western blotting allowed not only confirmation of the exosome uptake by microglia but also the induction of a gene expression pattern consistent with a polarization of microglia into an M2 phenotype with an anti-inflammatory role. A subsequent miRNA microarray analysis of exosomes derived from astrocytes showed a significant up-regulation of 135 different miRNAs, out of which the most represented appeared to be miR-873a-5p, involved in the NF-κB signaling pathway leading to microglial activation. Based on these results, miR-873a-5p expression was evaluated by qRT-PCR in clinical specimens of damaged brain tissue obtained from 15 patients who underwent neurosurgery. Brain tissue samples were all collected three days after the TBI occurrence and consisted of either necrotic brain tissue or severe edema around the necrotic lesion. As a result, miR-873a-5p expression appeared significantly higher within necrotic areas than in the edematous areas. Subsequent in vitro experiments showed that miR-873a-5p suppresses pro-inflammatory factors and promotes the release of anti-inflammatory factors from the microglia by inhibiting both ERK phosphorylation and the NF-κB signaling pathway. Taken together, these findings suggested the role of miR-873a-5p as a possible TBI marker and, simultaneously, as a therapeutic target for improving cerebral injuries and impaired neurological functions. Vorn et al. assessed the expression levels of plasma exomiRNAs in 29 subjects with a history of chronic mTBI compared to 11 healthy controls. As a result, 25 different plasma exomiRNAs appeared differentially expressed in chronic mTBI compared to healthy controls; among them, only 4 were up-regulated (hsa-miR-520e, hsa-miR-499b-3p, hsa-miR-520b, hsa-miR-4488). Further bioinformatic investigations helped explicate the molecular mechanisms underlying the impairment of brain function after mTBI; indeed, 14 exomiRNAs were related to neurological disease, 23 were related to organismal injury and abnormalities, and 13 were related to psychological disease. Compared to the well-known protein TBI biomarkers, circulating miRNAs might be preferable due to their specific characteristics. First of all, their small sizes allow higher stability even in highly degraded samples. Secondly, the high tissue-specific expression gives them a higher sensitivity to the pathology examined. In addition, due to their action at a post-transcriptional level, they can be detected in the early stages of a disease, long before the effects of downstream protein expression are observed. For these reasons, several works have been produced to evaluate the different miRNA profiles in fluid samples of TBI models compared to controls . We present here a series of studies that aimed to find possible TBI miRNA markers in human fluid samples. Redell et al. examined miRNA expression in plasma samples obtained from 15 severe TBI patients compared to healthy volunteers. Preliminary microarray analysis revealed an up-regulation of 33 miRNAs and a down-regulation of 19 other miRNAs; five of these—miR-16, miR-20a, miR-92a, miR-638, and miR-765—were then selected using known expression patterns potentially involved in TBI pathophysiology. 
Subsequent qRT-PCR carried out on samples collected within 24 h of TBI allowed confirmation of the potential value of miR-16, miR-92a, and miR-765 as diagnostic biomarkers. The comparison between TBI patients and non-TBI orthopedic injury controls revealed that miR-16 and miR-92a expression was significantly lower in TBI patients than in orthopedic controls but increased dramatically in mild TBI patients compared to healthy volunteers; in addition, the same two miRNAs could not be used to differentiate between mild TBI patients and orthopedic controls. Bhomia et al. compared serum miRNA profiles between severe TBI patients (sTBI), mild to moderate TBI (mmTBI) patients, orthopedic injury controls, and healthy volunteers using qRT-PCR analysis. They observed an up-regulation of 39 miRNAs in mmTBI, 37 miRNAs in sTBI, and 33 miRNAs in an orthopedic injury group compared to control samples. Ten of these (miR-151-5p, miR-195, miR-20a, miR-30d, miR-328, miR-362-3p, miR-451, miR-486, miR-505*, and miR-92a), which appeared up-regulated in both mild/moderate and severe TBI patients, were selected for further validation in CSF, which showed that only 4 of them (miR-328, miR-362-3p, miR-451, and miR-486) were also up-regulated in this sample. Bhomia et al. further showed that eight of the ten miRNAs (miR-195, miR-30d, miR-451, miR-92a, miR-486, miR-505, miR-362, and miR-20a) were significantly increased in TBI patients with abnormal CT scans ( n = 12) compared to TBI patients with normal CT scans ( n = 19). In their work, Di Pietro et al. evaluated the circulating miRNAs in serum samples collected one day and 15 days after injury from five mild TBI and five severe TBI patients. They compared these to samples obtained from five healthy volunteers. Array analyses revealed significant changes in the differential expression of 10 miRNAs at one day and 15 miRNAs at 15 days for mTBI samples compared to healthy controls; among these, only five (miR-425-5p, miR-126*, miR-144*, miR-590-3p, and miR-624) were similar between the two time points in mTBI patients. Analogous significant differential expression was observed for 19 miRNAs at one day and 22 miRNAs at 15 days in sTBI samples compared to healthy controls, particularly miR-21, miR-335, miR-190, miR-193a-5p, miR-144*, and miR-625*. To identify miRNA changes that could differentiate between mild and severe TBI, Di Pietro et al. selected those miRNAs appearing specifically altered in mTBI (miR-425-5p and miR-502) or sTBI (miR-21 and miR-335) patients and further validated them in a total of 120 patients divided into four groups: 30 mTBI, 30 sTBI, 30 extra-cranial injury (EC) controls, and 30 healthy patients; all serum samples were collected at T0, T4–12 h, T48–72 h, and 15 days. Among the examined miRNAs, only miR-21 was significantly up-regulated in sTBI patients compared to mTBI, EC controls, and healthy patients, and it also correlated with age, CT lesions, and the Injury Severity Score. Qin et al. performed an initial microarray assay on plasma samples obtained from a total of 90 TBI patients, which, compared to a group of 30 healthy controls, showed an up-regulation of 65, 33, and 16 miRNAs, as well as a down-regulation of 29, 27, and 6 miRNAs, in patients with mild, moderate, and severe TBI, respectively. Among these, 13 miRNAs (7 up-regulated and 6 down-regulated) were common to all TBI groups.
Subsequent pathway enrichment analyses showed that they shared commonly activated pathways, including the p53, mTOR, and TGF-β signaling pathway, SNARE interactions in vesicular transport, and the neurotrophin signaling pathway. Qin et al. then selected seven candidate miRNAs (miR-6867-5p, miR-3665, miR-328-5p, miR-762, miR-3195, miR-4669, and miR-2861) to be validated using qRT-PCR on a total of 100 separate and independent samples (25 in each group). All miRNAs appeared significantly up-regulated in all three TBI groups when compared to control samples, but the levels of miR-3195 and miR-328-5p were higher in the severe TBI group than in the mild and moderate TBI groups; in addition, miR-6867-5p levels in moderate and severe TBI groups were higher than those in mild TBI. Time-dependent miRNA expression in cases of severe TBI was evaluated by Ma et al. , who analyzed the serum obtained from a total of 20 patients. An RT-PCR analysis was performed, and changes in miRNA profiles were observed at 2, 12, 24, 48, and 72 h, with the following results: miR-18a, miR-203, miR-146a, miR-149, miR-23b, and miR-let-7b showed a >10-fold increase at 12 h compared to the 2 h time-point; all the previously cited except miR-18a showed the same magnitude of increase also at 24 h; miR-181d, miR-29a, and miR-18b showed a >5-fold increase at 48 h; miR-203, miR-146a, and miR-149 showed a >5-fold increase after 72 h. The use of bioinformatic tools helped determine that all the differentially expressed miRNAs were involved in pathways mainly related to cell proliferation, apoptosis, differentiation, inflammatory response, and collagen formation. Yan et al. evaluated differentially expressed miRNAs between mild, moderate, and severe TBI. An initial array to evaluate the levels of 754 serum miRNAs was performed in two pooled samples of 15 sTBI patients and 15 healthy controls, identifying 19 up-regulated miRNAs in the sTBI group with unfavorable outcomes compared to the control group. Next, 12 of these 19 miRNAs were selected to be validated by qRT-PCR in the serum samples of a larger cohort consisting of 81 sTBI patients, 81 mTBI patients, and 82 healthy controls. As a result, seven miRNAs (miR-103a-3p, miR-219a-5p, miR-302d-3p, miR-422a, miR-518f-3p, miR-520d-3p, and miR-627) appeared significantly up-regulated in both sTBI and mTBI patients compared to controls, and among these, miR-219a-5p, miR-422a, and miR-520d-3p levels appeared significantly higher in sTBI patients compared to mTBI patients. Yan et al. also investigated the correlation between the expression levels of the seven identified miRNAs with CT lesions; with this aim, 26 TBI patients without head CT and 136 TBI patients with lesions on head CT were analyzed. As a result, miR-103a-3p, miR-219a-5p, miR-302d-3p, miR-422a, and miR-627 levels were significantly higher in TBI patients with lesions than in those without lesions on head CT. Further bioinformatic analyses highlighted the role of up-regulated miR-219a-5p in the inhibition of CCNA2 and CACUL1 expression, thus contributing to the regulation of Akt/Foxo3a and p53/Bcl-2 signaling pathways in neuronal apoptosis activation. Lastly, Schindler et al. engaged in the evaluation of the levels of six miRNAs (miR-9-5p, miR-124-3p, miR-142-3p, miR-219a-5p, miR-338-3p, and miR-423-3p) in blood samples obtained within six hours after trauma from 33 patients, divided into three groups: severely injured patients without TBI (PT), those with severe TBI (PT + TBI), and patients with isolated TBI (isTBI). 
The results showed that miR-9-5p, miR-142-3p, and miR-219a-5p could not be detected in any group, while miR-338-3p levels did not show any change between all trauma groups. Interesting results were obtained for miR-423-3p, whose expression significantly increased in patients with severe isTBI, followed by PT + TBI, compared to PT patients without TBI; statistical analyses further showed that miR-423-3p levels positively correlate with TBI severity and risk of mortality, leading to the conclusion that it could represent a promising biomarker to identify severe isolated TBI. The main findings of the reviewed articles are summarized in . A correct post-mortem diagnosis of TBI needs a proper interpretation of the findings from different investigations, including external inspection, post-mortem radiology, autopsy, and histology as the routine gold standard procedures, and ante mortem data whenever available. Nonetheless, several conditions exist in which such approaches alone might not be conclusive, especially in cases in which other neuropathological conditions (ischemia, neurodegeneration, etc.) may have contributed to the death or in which some signs might be variably interpreted (e.g., in the absence of other traumatic signs, brain bleeding can either be interpreted as hypostatic or related to a traumatic subarachnoid or parenchymal hemorrhage); additional challenges are to be faced in cases of an advanced state of decomposition, where radiological, macroscopic, histological, and toxicological analyses cannot provide useful information . In these scenarios, implementing new, innovative approaches in the forensic field becomes essential. Among these, the evaluation of TBI-related changes in miRNAs expression profiles has lately been attracting attention. Circulating miRNAs are preferable to other protein biomarkers for their intrinsic characteristics: higher stability even in highly degraded samples due to their small sizes; high stability even at extreme temperatures, pH conditions, and chemical treatments; transportation within lipoprotein complexes and RNA-binding proteins in extracellular vesicles, which preserves them from endogenous RNase activity . In addition, unlike most proteins, brain-specific miRNAs can be easily evaluated in body fluids following injury since, due to their small sizes, they can cross the blood–brain barrier via microvesicles, exosomes, and lipoprotein carriers . Depending on the desired information, miRNA profiling can be performed relying on different approaches, each easy to perform and providing high sensitivity and reproducibility. These include microarray and/or NGS when simultaneous detection of hundreds of low copy number miRNAs is requested, or qRT-PCR when the aim is to analyze only a few selected miRNAs . Although such characteristics make miRNAs promising biomarkers in a post mortem setting and in a clinical one, some limits need to be considered. One of them is related to the lack of uniform and validated protocols on a global level, which prevents a comparison of the results between different laboratories. Another issue concerns the need for proper endogenous controls and normalization procedures that evaluate miRNA fold changes . Last, but not least, the impact of demographic characteristics, such as age, sex, and body mass index, in miRNA profile variability should be considered . Other limits are related to the samples used. 
Although plasma and serum, being easily accessible, represent the most suitable matrices for miRNA evaluation in a post mortem setting, the implementation of uniform procedures should be promoted to assess the sample quality before miRNA profiling: indeed, fibrinogen in plasma samples can be a source of contamination affecting the extraction quality, while serum clots can alter the actual miRNA profile . Both advantages and limits are summarized in . Although several TBI-related miRNAs have been identified that could be suitable for evaluation in a post mortem setting for diagnostic purposes, a great deal of work is yet to be performed to produce reliable protocols for obtaining uniform and comparable data between laboratories worldwide. Studies with larger samples are mandatory to confirm literature data regarding the role of miRNA in TBI diagnosis and to use these molecules as biomarkers in forensic investigation. The changing expression patterns of several miRNAs according to different neurological conditions highlight a possible role of miRNAs as a diagnostic tool in the medico-legal analysis to ascertain the cause and manner of death. The presented review explored the role of miRNAs in the biomolecular mechanisms involved in TBI pathophysiology and their possible application in postmortem diagnosis in a forensic setting. Since discovery of their critical role in almost every biological function, miRNA profiling has attracted increasing interest in the clinical context to uncover the pathophysiological mechanisms of several diseases and provide suggestions for new therapeutic strategies. Translating miRNA profiling techniques to the forensic context has also taken on the utmost importance since they provide useful information, especially when implemented together with routine forensic investigations within a multidisciplinary approach. Despite the great potential shown by these techniques, their active implementation in forensic investigations requires thorough validation studies to provide uniform and certain interpretation of the results and strong reliability for when the data are presented in a courtroom.
Synthetic Potential of Regio- and Stereoselective Ring Expansion Reactions of Six-Membered Carbo- and Heterocyclic Ring Systems: A Review
Ring expansion reactions are among the most efficient reactions in chemistry for the synthesis of functionalized medium- to large-size ring systems. In recent years, efforts have been put forward by scientists towards the synthesis of large-size carbo- as well as heterocycles via ring expansion reactions. Six-membered ring expansion reactions are of great importance for synthesizing 7- to 12- and higher-membered rings. Large-size carbo- and heterocyclic rings are of great interest to organic chemists due to their medicinal properties . Benazepril 3 is an angiotensin-converting enzyme inhibitor that has beneficial effects in the treatment of diabetic kidney disease, hypertension and heart failure . Paullones are derived from indolo[3,2-d]-1-benzazepin-2-one and inhibit cyclin-dependent kinases (CDKs). Kenpaullone and alsterpaullone (1a,b) have demonstrated anticancer and CDK-inhibitory activity . Metapramine 2 and opipramol 4 are used to treat depression . Some large-size carbocyclic/heterocyclic compounds exhibit various biological activities, such as antitumor activity , inhibition of HIV-1 replication , antinociceptive activity , antiepileptic activity , inhibition of α-glucosidase and inhibition of cytidine deaminase. Herein, we report recent synthetic approaches used for the formation of large-size ring lactams, lactones, O-, N- and S-containing heterocyclic rings, substituted azulenes and azepine derivatives. A number of rearrangements, including the Aza–Claisen rearrangement , Beckmann rearrangement , Tiffeneau–Demjanov rearrangement and Schmidt rearrangement , as well as (10 + 4) and (6 + 6) cycloaddition reactions , are discussed in the current review. 2.1. Ring Expansion Reactions for the Synthesis of Lactams The effect of nonbonded, attractive cation–π interactions on the reactions of hydroxyalkyl azides and cyclic ketones has been explored by computational and experimental means. An attractive asymmetric pathway to lactams via the hydroxyalkyl azide-mediated ring expansion of cyclic ketones was reported by Katz et al. The reaction of 2-aryl-1,3-hydroxyalkyl azide 6 with 4-tert-butylcyclohexanone 5 afforded the diastereomeric lactams 7a and 7b in a maximum yield of 99% with a 43:57 diastereomeric ratio. A solvent study revealed that, among solvents such as toluene, diglyme, CH2Cl2, (C2H5)2O and n-C5H12, CH2Cl2 provided the 99% yield. The reaction proceeded well via an asymmetric ring expansion using BF3·OEt2, followed by hydrolysis of the intermediate iminium ethers with aqueous potassium hydroxide . Beckmann rearrangement is the rearrangement of an oxime functional group to an amide, which usually leads to ring expansion. Hadimani et al. reported the formation of a lactam by ring expansion of an acyloxy nitroso compound. 1-Nitrosocyclohexyl acetate 8 was treated with triphenylphosphine (TPP) in benzene, which gave the seven-membered Beckmann rearrangement product 9 in a 55% yield, along with triphenylphosphine oxide. Subsequently, intermediate 9 underwent acid-catalyzed hydrolysis (1 M HCl) to produce caprolactam 10 . Medium and macrocyclic ring scaffolds have contributed significantly to medicinal chemistry . Unsworth and his colleagues developed a synthetic approach to a wide range of medium-size lactams possessing medicinal lead-like properties. The lead-likeness of the medium-size lactams was analyzed using lead-likeness and molecular analysis (LLAMA).
The 10-membered ring lactam 12 was obtained in an 84% (maximum) yield by ring expansion of β -keto ester 11 with acid chloride in the presence of MgCl 2 , pyiridine and dichloromethane, followed by the addition of piperidine and CH 2 Cl 2 . Room temperature was maintained for 1–2 h to carry out the reaction . After achieving satisfactory results, authors applied this protocol on different substrates, affording substituted lactams in a 12–84% yield. This methodology demonstrates the worth of ring expansion reactions for synthesizing the medicinally appropriate compounds . The effective and flexible protocol for the generation of benzannulated medium ring lactams from bicyclic scaffolds through oxidative dearomatization ring-expanding rearomatization reaction (ODRE) was outlined by Guney et al. A wide range of benzannulated lactams was attained in a 53–89% yield range by maintaining temperatures at 0–24 °C. However, the excellent yield (89%) of benzannulated lactam 14 was achieved by the reaction of chromanone-derived compound 13 with bis(trifluoroacetoxy)iodobenzene (PhI(TFA) 2 ) in nitromethane as solvent for 0.5–2 h . In contrast, the halobenzocyclooctanols and halobenzosuberanols afforded 10- and 11-membered benzannulated lactams in 70–90% yield . Xu et al. scrutinized the catalyst-free electrochemical ring expansion reaction for synthesizing the synthetically challenging annulated medium-size lactams via carbon–carbon bond cleavage. Highly functionalized medium-size lactam 16 was obtained in a 98% yield when CF 3 -aryl-substituted substrate 15 was electrolyzed in the electrolytic solution of n -Bu 4 NBF 4 in CH 3 CN/H 2 O without using any catalysts or bases at 25 °C . The electrolytic reaction shows compatibility with the electron-rich and electron-poor groups at either para- or metapositions of aniline to obtain a wide range of products (30–98%) . The compounds containing the benzazepines motif have exhibited considerable importance in medicinal chemistry. For example, benazepril is an angiotensin-converting enzyme inhibitor utilized to cure heart failure, hypertension and diabetic kidney disease. Paullones are the derivatives of indole [3,2-d]-1-benzoazepin-2-one that inhibit cyclin-dependent kinases (CDKs). Zarraga and coworkers performed the reaction of oxime 17 with POCl 3 in order to obtain polyheterocyclic lactam. The reaction mixture was refluxed at 70 °C in dry THF for 1.5 h, giving a 67% yield of polyheterocyclic lactam 18 . The corresponding lactam was achieved by a tandem process involving Beckmann rearrangement/expansion and subsequent cyclization of nitrogen atom and primary halide fragment of 17 . 2.2. Ring Expansion Reactions for the Synthesis of Lactones 1,3-Benzoxazine, a class of cyclic compounds, has obtained appreciable attention because of its ring-opening polymerization that synthesizes polymers with excellent characteristics, such as thermal stability , high mechanical strength and durability under humid environment . Endo and coworkers outlined the synthesis of eight-membered ring lactone by using benzoxazine as starting material. The solution of 1,3-benzoxazine 19 in acetic anhydride was refluxed using p -toluenesulfonic acid (1 mol%) as catalyst. The reaction mixture underwent ring expansion to afford eight-membered ring lactone 20 , containing a tertiary amine group in the ring, in a maximum of 70% yield with 90% conversion in 1 h . For this purpose, the temperature outside the container was maintained at 150 °C . Shintani et al. 
in 2011 reported a ring expansion reaction of valerolactones 21 with aziridines 22 via a (6 + 3) cyclization, which resulted in the synthesis of nine-membered cyclic rings referred to as “azlactones” 23 . These azlactones are regarded as 1,4-oxanones, and they cannot easily be obtained by other previously reported methodologies. 2.3. Ring Expansion Reactions for the Formation of Azulene Derivatives Azulenes are composed of a five-membered ring fused to a seven-membered ring and are important colored compounds that are used as indicators, dyes and imaging agents. Furthermore, these compounds show prominent biological and electronic properties. Gorgensen and colleagues synthesized a series of polysubstituted azulene derivatives through the organocatalytic (10 + 4) cycloaddition of indene-2-carbaldehydes and chromen-4-ones. Electron-deficient substituents provided azulenes in excellent yields (up to 98%), while electron-rich substituents gave the corresponding azulenes in the 45–83% yield range. The substituted azulene 27 was obtained in a 98% yield by reaction of substituted indene-2-carbaldehyde 24 with substituted chromen-4-one 25 in the presence of 15 mol% pyrrolidine 26 , 15 mol% p -MeOBzOH, CDCl 3 and molecular sieves at 40 °C. 2.4. Ring Expansion Reactions for the Synthesis of Azepine Derivatives The azepine class of heterocycles has gained tremendous importance due to its wide-ranging biological activities. For example, some compounds are used to treat cardiovascular diseases and some are inhibitors of cytidine deaminase. A general approach towards the formation of 1,3-diazepin-2-one derivatives by the ring expansion reaction of pyrimidines with different nucleophiles was developed by Shutalev and coworkers. They synthesized different diazepinone derivatives in good-to-excellent yields under different reaction conditions. The reaction of 28a,b (where R = CH 3 and Ph, respectively) with sodium diethyl malonate proceeded in THF at room temperature to give diazepinones 29a and 29b in 90% and 92% yields, respectively. 4-Mesyloxymethyl-pyrimidines 28a,b reacted with potassium phthalimide 30 under reflux in acetonitrile (MeCN) to afford the desired products 31a,b in 96% yields. The pyrimidines 28a,b refluxed in tetrahydrofuran in the presence of succinimide 32 and sodium hydride provided the diazepinones 33a and 33b in 93% and 92% yields, respectively. The 4-unsubstituted diazepinones 34a and 34b were attained in 66% and 72% yields, respectively, on refluxing 28a,b with NaBH 4 in THF. Fesenko et al. reported the nucleophile-dependent diastereoselectivity of the ring expansion of pyrimidines with nucleophiles to synthesize polysubstituted 1,3-diazepines. Several functionalized polysubstituted 1,3-diazepin-2-ones were obtained in excellent yields (80–97%) by the reaction of pyrimidines with different nucleophiles. The treatment of tetrahydropyrimidine 35 with MeONa in methanol gave a single cis-diastereomer 36 in a 93% yield. The reaction of 35 with EtONa in ethanol gave the 4-ethoxydiazepine 37 diastereoselectively (cis/trans = 93/7) in a 97% yield. When 35 was treated with sodium cyanide (NaCN) in DMSO at room temperature, a mixture of cis- and trans-diastereomers (cis/trans = 94/6) 38 was obtained in an 80% yield. The trans-diastereomer 39 was obtained in an 83% yield by the reaction of 35 with PhSNa, generated by the reaction of PhSH with sodium hydride in THF.
The pyrimidine 35 and potassium phthalimide 30 were refluxed in acetonitrile, and the full trans-isomer 40 was obtained in a 95% yield . Tetrahydrodiazepinones are essential biologically active heterocycles that have been rarely studied in the past due to the scarcity of their precursors. In order to obtain medicinally important diazepinones, Fesenko and coworker carried out the synthesis of diversely substituted 1,2,3,4-tetrahydropyrimidinones by reacting α-tosyl ketone enolates with N -[(2-benzoyloxy-1-tosyl)ethyl]urea. The synthesized tetrahydropyrimidinones 41 were then subjected to treatment with different nucleophiles, i.e., potassium phthalimide, sodium salt of diethyl malonate, sodium cyanide and sodium thiophenolate, to furnish tetrahydro-1,3-diazepinones 42, 43, 44 and 45 in good yields . Dihydropyrimidinones are easily accessible organic compounds; however, their extended seven-membered rings are rarely obtained. In order to obtain seven-membered cyclic system of Biginelli compounds, substituted Biginelli compounds 46 were treated with nucleophilic substances which first underwent removal of the proton, resulting in the generation of bicyclic intermediate as a result of subsequent nucleophilic substitution reaction. The intermediate then underwent ring expansion to give diazepinones 47 . Shutalev et al., in 2008, employed easily accessible pyrimidinone derivative 48 to treat it with non-nucleophilic strong base, i.e., NaH, which furnished tricyclic bis-diazepinone derivative 49 as a result of tandem reaction . Shutalev and coworkers outlined the synthetic protocol of diazepine by ring expansion of acyl-substituted pyrimidines. A series of diazepines was attained in a 43–96% yield range by applying different reaction conditions. The maximum (96%) yield of 1,3-diazepine 51 was obtained on refluxing 5-acyl-substituted pyrimidine 50a with potassium phthalimide 30 in acetonitrile solvent for 1 h. Another diazepine derivative 52 was obtained in a 96% yield by the reaction of 50b with thiophenol and sodium hydride in tetrahydrofuran (THF) for 2 h . This protocol has a wide substrate scope, allowing the formation of substituted diazepines (80–96%) . 2.5. Ring Expansion Reaction for the Synthesis of Tropone Derivatives Tropones have gained significant importance owing to their unique structural features, along with their remarkable biological activities. They are found to be essential structural motifs of various naturally occurring and medicinally important organic compounds. The ring elongation reaction of alkynyl quinols results in the synthesis of tropone derivatives. Considering the wide applications of tropone derivatives, Zhao et al., in 2015, reported the synthesis of tropone derivatives by employing the ring elongation reaction of alkynyl quinols in the presence of various gold catalysts and a diverse range of solvents. The optimization reactions revealed that the utilization of PPh 3 AuNTf catalyst and DCE solvent resulted in high yields of target molecules . In 2021, Du et al. reported the ring extension reaction of alkynyl quinols by employing a highly efficient catalyst, i.e., an MCM-41-based confined complex of gold carrying altered benzylidene phosphine. MCM-41-BnPh 2 -AuX is regarded as a productive catalyst owing to its large surface area, resistance to high temperature and facile recycling properties. Taking into account the usage of this effective catalyst, Du et al. synthesized and then incorporated this catalyst into the expansion reaction of alkynyl quinols. 
The reaction conditions were optimized by using various solvents and by varying X in gold complex catalyst. However, the highest yield (91%) was obtained by using DCE as solvent, along with the incorporation of NTf 2 in MCM-41-BnPh 2 -AuX . 2.6. Miscellaneous Ring Expansion Reactions Benzoazocine-containing drugs and alkaloids exhibit numerous biological activities, e.g., antitumor activity , antinociception activity , inhibition of α -glucosidase , acetylcholinesterase and inhibition of HIV-1 replication . Voskressensky and coworkers reported the synthesis of benzoazocines via the ring enlargement of substituted tetrahydroisoquinolines and activated alkynes. A total of 37–83% of substituted tetrahydrobenzoazocines was achieved by the reaction of different tetrahydroisoquinolines with alkynes using acetonitrile as solvent. The tetrahydroisoquinoline 57 was refluxed with alkyne 58 in acetonitrile, giving corresponding tetrahydrobenzoazocines 59 in an 83% yield. In addition, the maximum (83%) yield of 61 was obtained by the reaction of 57 and 60 . Acetonitrile was selected as the appropriate solvent for this reaction . The seven-membered carbocyclic rings are mostly found in potent medicinal scaffolds such as thapsigargins , colchicines , hinokitiol , etc. The ring expansion reaction for the formation of dihydrotropones conducted by the rearrangement of spirocyclohexadienones was performed by Guillou and coworkers. To achieve the targeted dihydrotropone 63 , authors utilized spirocyclichexadienone 62 as the starting precursor. For this protocol, MeONa in methanol was selected as an effective reaction media. An excellent yield (75%) of dihydrotropone 63 was attained by the rearrangement of spirocyclichexadienone 62 within 30 min by keeping the temperature at 40 °C . Free radical ring expansion reactions of five and six-membered rings have extended the use of radical chemistry for framing the large cyclic compounds. Xu et al. reported the ring expansion reaction promoted by free radical and associated cyclization of 1,3-diketone derivatives. A 68–72% yield range of targeted nine-membered ring compounds was attained under optimized reaction conditions (Bu 3 SnH, AIBN, C 6 H 6 and 80 °C). The compound 56 was refluxed in benzene using tri- n -butyltin hydride (Bu 3 SnH) and azobisisobutyronitrile (AIBN) as catalyst. As a result, targeted derivative 57 was achieved as the major product, along with direct reduction product 58 in a 12% yield by maintaining the temperature at 80 °C . Seven-membered carbocycles gained substantial attention due to their remarkable biological activities. Maruoka and colleagues made a novel contribution to the stereoselective construction of seven-membered ring compounds through direct Tiffeneau–Demjanov-type ring enlargement. The authors performed the reaction between t -butyl α -benzyl diazoacetate 67 and 4-phenylcyclohexanone 68 . Resultantly, the corresponding product with one all-carbon quaternary center was obtained in a 95% yield. To facilitate this approach, optimized reaction conditions involved 20 mol% BF 3 . OEt 2 (catalyst), CH 2 Cl 2 (solvent), 30 min and −78 °C temperature . The 40–95% yield range proved the effectivity of this methodology . Ballesteros-Garrido et al. reported the synthesis of medium-size rings by using rhodium-catalyzed ring expansion reaction in one pot. Several functionalized nine-membered ring compounds were isolated in the low-to-high yields (31–85%) by the reaction of α -diazodicarbonyls with cyclic acetals. 
The α -diazo β -ketoester 70 underwent reaction with 1,3,5-trioxane 71 at 60 °C using Rh 2 (OAc) 4 (1 mol%) as catalyst. Toluene proved to be a suitable solvent for this reaction. Consequently, an 85% yield of the corresponding product 72 was achieved as a single product. Seven-membered ring compounds possessing significant biological activities have prompted chemists to develop synthetic routes for their preparation. Silva et al. reported a metal-free, versatile and efficient method for synthesizing hetero- and carbocycles containing seven-membered rings. The 1-vinylcycloalkanol derivative 73 underwent hydroxy(tosyloxy)iodobenzene (HTIB)-promoted ring expansion. As a result, a mixture of O-heterocycles 74 , 75 and 76 was attained in a 72–88% yield range. The reaction parameters for the corresponding reaction include HTIB, a hypervalent iodine reagent, and methanol as solvent. The easy availability of the starting materials is the salient feature of this protocol. Seven-membered heterocycles are core structures of numerous naturally occurring, biologically potent organic compounds. Cancer is among the most prevalent and deadly diseases of the recent era, and continuous efforts are in progress to devise anticancer agents. For example, theaflavin is known to inhibit the proliferation of cancerous cells. Similarly, TAK-779 is highly effective against human immunodeficiency virus. There have been numerous reports concerning the synthesis of seven-membered cyclic organic compounds via the utilization of titanium, palladium or mercury. Silva et al., in 2008, carried out the ring expansion reaction of six-membered cyclic rings by devising a metal-free synthetic approach. For this purpose, they treated the silyl or methyl ethers of vinylcycloalkanols with HTIB (PhI(OH)OTs) in methanol to furnish lactone-based cyclic compounds 77 . Similarly, the reaction of the silyl enol ether of vinylcycloalkanol 73 with the iodine reagent in methanol as solvent in the presence of p -TsOH led to the synthesis of methoxy ketone 74 in a 61% yield. However, the reaction of the vinylcycloalkanol with 2.5 equivalents of HTIB in the presence of methanol gave dimethoxyketone 75 in a 75% yield. Oxygen-, sulfur- and nitrogen-containing heterocycles are diversely and abundantly found in various pharmaceutically important organic compounds. For example, bauhiniastatin is a naturally occurring compound containing a seven-membered ring that has been found to be a potent cytotoxic agent. Taking into account the notable pharmacological applications of seven-membered cyclic rings, Khan et al. in 2021 attempted their synthesis via a ring expansion reaction by utilizing an iodine reagent, i.e., HTIB (PhI(OH)OTs), as catalyst. HTIB (PhI(OH)OTs) is known as Koser’s reagent and has been found to be highly effective as an alternative to the expensive metal catalysts that were previously employed for the synthesis of seven-membered heterocyclic rings. Khan et al. reacted ketone 78 with potassium tert -butoxide, Ph 3 PCH 3 Br and diethyl ether, which resulted in the synthesis of the chromane intermediate 79 . The synthesized chromane was then treated with the HTIB catalyst in acetonitrile and water as solvent to give the seven-membered heterocycle 80 in a good yield. Seven-membered sulfonamides possess notable biological activities, such as the inhibition of HIV-1 protease, calcium-sensing receptor agonism and apical sodium-dependent bile acid transporter (ASBT) inhibition.
Moreover, they act as synthetic intermediates for synthesizing biologically active scaffolds. Xia et al. reported the reactions of N -sulfonylimines with diazomethane under metal-free conditions, providing enesulfonamides in a 45–99% yield range. The reaction worked efficiently using Cs 2 CO 3 as base in 1,4-dioxane at 60 °C. A maximum (99%) yield of enesulfonamide 83 was obtained under an argon atmosphere. Tricyclic dibenzazepine and dibenzoxepine derivatives are therapeutic agents and possess a wide range of pharmaceutical properties; for example, metapramine and opipramol display analgesic and anxiolytic properties, respectively. In addition, oxcarbazepine, an antiepileptic drug (AED), is used as monotherapy for the treatment of partial seizures in adults. Mancheno and coworkers reported a facile way to synthesize dibenzazepines and dibenzoxepines through oxidative C–H bond rearrangement and ring expansion using a copper catalyst. The tricyclic product 86 was obtained in a 75% yield by the reaction of 84 with TMS-CHN 2 85 in acetonitrile. The reaction was completed in 18 h using 10 mol% Cu(OTf) 2 as catalyst, 30 mol% of 2,2′-bipyridine as ligand and Ph(CO 2 ) 2 as additive. Electron-donating (OMe, Ph, Me) and electron-withdrawing (F, Br) substituents were well-tolerated in the corresponding reaction, giving 55–74% yields of the respective products. The substitution of O by an N-Ph group gave a 55% yield, while substitution with sulfur gave a 36% yield of the corresponding product. The insertion of isolated alkynes into the carbon–carbon σ-bond of unstrained cyclic β -dicarbonyl compounds without the use of transition metals was outlined by Zhou et al. The reaction of different alkynes and dicarbonyl compounds gave the corresponding ring expansion products in a 49–86% yield range. It was noted that electron-donating and -withdrawing substituents on the aryl groups of the alkynes gave the respective products in good yields. The insertion of alkynes 87 into cyclic β -diketone 88 was facilitated under optimized reaction conditions (Cs 2 CO 3 , DMAc, 80 °C). Cesium carbonate (Cs 2 CO 3 , 2.0 equiv) was used as base and dimethylacetamide (DMAc) as an effective solvent, as compared with DMF, DMSO and toluene, to attain the required product 89 in an 86% yield. Due to the mild reaction conditions and easily accessible starting precursors, this approach is broadly applicable in organic synthesis. The occurrence of 8- to 12-membered medium-ring heterocycles in natural products is associated with several biological activities. Clayden and coworkers reported an effective method to synthesize medium (8- to 12-membered) benzo-fused nitrogen-containing heterocyclic rings via the n → n + 3 ring expansion of metalated ureas. A series of substituted nine-membered benzodiazonines obtained in 50–90% yields proved the substrate scope and efficacy of this protocol. The reaction was performed in tetrahydrofuran in the presence of lithium diisopropylamide (LDA) as base and N , N ′ -dimethylpropyleneurea (DMPU) as additive. By maintaining the temperature at −78 °C, product 91 was attained through migratory ring expansion of the urea derivative in 1–16 h. Bases such as LDA and sec-BuLi were investigated, and LDA was selected as the most suitable base for this reaction. Pyrrole-annulated medium-sized rings have attracted considerable attention, as they are found in pharmaceutical agents, naturally occurring products and biologically active compounds.
For example, some of these compounds show useful effects on the central nervous system and some are agonists of the farnesoid X receptor. The enantioselective construction of pyrrole-annulated seven-membered rings by allylic dearomatization or retro-Mannich/hydrolysis using an iridium catalyst was reported by Huang et al. The targeted product 95 was obtained in a 77% yield and with an enantioselectivity of 99% ee by the reaction of 92 with 2 mol% of the iridium catalyst (Ir(COD)Cl) 2 and 4 mol% of ligand 93 . The reaction worked well by utilizing 100 mol% Cs 2 CO 3 as base in THF through an intermediate 94 , followed by Boc protection of the amino group using Boc 2 O and triethylamine (Et 3 N). The reaction was completed in 4–12 h by keeping the temperature at 50 °C. Oxetanes are abundantly present in several medicinal drugs. Shibata and coworkers reported a simple method for the formation of 12-membered trifluoromethyl heterocycles through the non-decarboxylative palladium-catalyzed (6 + 6) annulation of 6-membered trifluoromethylbenzo[d][1,3]oxazinones with several vinyl oxetanes. Vinyl oxetanes containing electron-rich (4-OCH 3 , 2-OMe, 4-CH 3 ) and electron-poor (4-Cl, 4-Br, 4-F) substituents on the aryl ring gave the respective products in good-to-excellent yields (66–93%). The impact of different solvents (THF, CH 2 Cl 2 , 1,4-dioxane, MeCN and toluene) was examined, and the results showed that 1,4-dioxane gave the maximum yield compared with the other solvents. The highly functionalized trifluoromethyl 12-membered ring heterocycle 98 was obtained in an excellent yield (93%) by the reaction of 96 with substituted vinyl oxetane 97 using 5 mol% Pd(PPh 3 ) 4 via Pd-catalyzed non-decarboxylative (6 + 6) annulation. Most pharmaceutically important organic compounds contain saturated cyclic rings bearing nitrogen and oxygen atoms. The synthesis of heterocycles containing more carbon atoms is an interesting task, which can be accomplished by employing a ring-closing strategy on open-chain starting materials. However, Grant et al., in 2008, reported the synthesis of these heterocycles by subjecting lactams or lactones to a ring-expansion reaction. For this purpose, they treated N -sulfonyllactams 99 and readily available lactones with tert -butyl propiolate 100 in the presence of n -butyllithium and boron trifluoride diethyl etherate, which gave acyclic adducts 101 in good yields. The synthesized adducts were then made to react with pyridinium acetate, which upon cyclization gave six- or seven-carbon-containing cyclic ethers or amines 102 . The higher carbon-containing cyclic rings are synthesized as a result of this ring elongation methodology, which proceeds via a nucleophile-catalyzed reaction. Selenium-incorporated heterocycles have gained considerable significance due to their important biological applications and extraordinary chemical properties. However, there are few reports concerning the synthesis of selenium-based cyclic rings. Thus, in 2008, Sashida et al. reacted tetrahydroselenopyranone 103 with diversely substituted ethynyl lithiums to synthesize the selenium-incorporated eight-membered heterocyclic ring 105 . The reaction was carried out under different conditions, and the highest yield (95%) was obtained when the reaction between the two substrates was carried out with R = Ph, employing THF as solvent at −40 °C in the presence of sulfuric acid as proton source.
A wide variety of medicinal drugs are fluorine-containing organic compounds, which emphasizes the importance of synthesizing fluorine-based organic molecules. According to data reported by the FDA in 2018, almost 47% of approved drugs include fluorine/CF 3 in their structural formula. Considering the wide applicability of fluorine-based organic molecules, Uno et al. carried out the ring extension reaction of trifluoromethyl-benzo[1,3]-oxazinones by employing an optically active ligand as catalyst. In this regard, they reacted the [1,3]-oxazinones with vinyl-substituted ethylene carbonates in the presence of palladium acetate and ( R )-Tol-BINAP, which led to the synthesis of a nine-membered heterocycle, i.e., the dextrorotatory and levorotatory mirror images of trifluoromethyl-substituted [1,5]-oxazonines in high enantiomeric excess (up to 98% ee). In the first step, double removal of carboxylic acid results in ring elongation to furnish the phenyl-substituted ( R )- 108 . The unreacted starting material in the second cycle is then converted to ( S )- 108 in an 88% yield with 89% ee without utilizing another optically active ligand. However, the thiophene-substituted ethylene carbonate gave the target molecule with 98% ee in an 81% yield.
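For reference, the enantiomeric excess (ee) and diastereomeric ratio (dr) values quoted throughout this review follow their standard definitions; the short summary below is added only as a reader aid and uses generic [R]/[S] labels rather than any specific compound from the cited studies.

```latex
% Standard stereochemical descriptors used in the values quoted above.
\[
\mathrm{ee} \;=\; \frac{\lvert [R] - [S] \rvert}{[R] + [S]} \times 100\%,
\qquad
\mathrm{dr} \;=\; [\text{major diastereomer}] : [\text{minor diastereomer}]
\]
% Example: 98% ee corresponds to a 99:1 enantiomer ratio,
% and a dr of 93:7 corresponds to a diastereomeric excess of 86%.
```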
In summary, the synthesis of biologically active carbocyclic/heterocyclic compounds has been achieved by numerous pathways. The current review outlines a wide range of reactions to demonstrate the synthetic importance of functionalized carbocyclic/heterocyclic molecules formed via six-membered ring expansion reactions, covering the literature reported from 2017 to 2022. The use of catalysts and ligands and the effect of different substituents on product yields are also illustrated. Moreover, such ring expansion reactions should provide useful guidelines for synthetic and pharmaceutical chemists to obtain valuable ring expansion products.
Partners in Postmortem Interval Estimation: X-ray Diffraction and Fourier Transform Spectroscopy
Estimating the postmortem interval (PMI)—the time elapsed between the physiological death of an organism and its examination —is one of the most relevant challenges in forensic anthropology and forensic medicine, and it has important legal implications. There are many approaches for estimating PMI in the early stages of decomposition . In contrast, there are currently no reliable, accurate methods for determining PMI in the later stages of decomposition. The early postmortem period ranges from 3 to 72 h after death and is usually estimated using the classical postmortem changes (rigor mortis, livor mortis and algor mortis). The late postmortem interval begins with body tissue disintegration and is mainly described as decomposition or putrefaction, adipocere formation, mummification or skeletonization . PMI estimation has been studied in a variety of samples at different stages of decomposition, including vitreous humor , blood , soft tissues and skeletal remains . A range of techniques have been applied to improve this estimate, such as thermogravimetry , micro-computed tomography , near-infrared spectroscopy , among others. New molecular approaches, such as proteomics , molecular and microbiome techniques , are increasingly being considered for PMI estimation. X-ray diffraction (XRD) and vibrational spectroscopy are starting to be applied to the study of PMI. XRD in particular has been studied in human bone, with the finding that the degree of crystallinity of hydroxyapatite increases in the postmortem degradation process, showing a larger crystal size . Individually, XRD and FTIR methods have several advantages: they are relatively easy to perform; they are fast, inexpensive techniques to study solids with a non-invasive method; and they only require a small amount of sample for detection (<1 mg) in a totally non-destructive manner . This means they could be easily applied in forensic scenarios for estimating PMI. The late PMI has been studied using Fourier transform infrared (FTIR) spectroscopy in conjunction with chemometrics methods , using Raman spectroscopy and chemometrics , applying NIR spectroscopy (NIRONE ® Sensor X; Spectral Engines, 61449, Steinbach, Germany) , combining XRD and biochemical analyses in human bones and using bacterial community succession in human bone , among others. In addition to the aforementioned techniques, other studies have focused on applying Fourier transform infrared spectroscopy (FTIR) in order to distinguish postmortem changes in various types of samples, such as human or animal tissue [ , , ]. FTIR has also been used for postmortem studies in bone samples . These works show that it is possible to distinguish between ancient and recent human bone by the crystallinity index and carbonate/phosphate index obtained from the FTIR spectra . Other studies show that the spectra yielded by FTIR varied significantly in accordance with PMI, identifying the alteration of Amide I as the parameter that best estimated PMI in bone . Furthermore, an increase in PMI leads to an increase in both the amount of Type-B carbonate and the carbonate/phosphate ratio but a decrease in the crystallinity index. The crystallinity index and carbonate ratio have therefore been identified as the most suitable FTIR and XRD indices for estimating PMI, especially in bones from females . Despite extensive studies assessing compositional degradation in bone during PMI, there is a gap of knowledge regarding teeth, with very few studies on tooth composition using FTIR or XRD. 
In general, these few works focus on chemical differences in the composition of the different parts of the tooth in humans and animals. However, teeth are a very valuable sample for PMI studies in forensic and anthropological practice, as they are highly resistant to postmortem degradation, putrefaction, aging and external environmental factors. Moreover, they are commonly found in forensic, anthropological or archaeological settings. Their high inorganic content makes teeth the hardest structures in the human body, and their location in the jawbone provides additional protection from putrefaction compared to bones. Tooth degradation encompasses a series of structural and molecular changes that lead to a decrease in complex organic molecules and a proportional increase in the inorganic matrix, and these changes require in-depth study. However, to the best of our knowledge, there are no studies analyzing PMI by FTIR or XRD in human dental samples, meaning postmortem changes in tooth structure and composition remain poorly understood. This study therefore aims to assess whether there is a correlation between changes in the mineral composition of human teeth and the estimation of PMI by FTIR and XRD techniques. To this end, we employed XRD and Rietveld refinement for quantitative analysis of crystallographic parameters, along with attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy for quantification of relevant parameters that reflect the relative content of tooth compounds, such as phosphates ( v 1 v 3 PO 4 3− ), carbonates ( v 2 CO 3 2− ) and amide I ( v C=O), in human teeth with different PMIs.
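As a rough illustration of how crystallographic and spectroscopic parameters of this kind can be derived from raw measurements, a minimal Python sketch is given below. The Scherrer equation and the band integration windows shown are common literature choices and are assumptions for illustration only; they do not reproduce the exact procedures (Rietveld refinement, specific band limits) used in this study.

```python
import numpy as np

def scherrer_crystal_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Estimate the mean crystallite size (nm) from a single XRD peak
    using the Scherrer equation D = K * lambda / (beta * cos(theta))."""
    beta = np.radians(fwhm_deg)               # peak FWHM in radians
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle in radians
    return k * wavelength_nm / (beta * np.cos(theta))

def band_area(wavenumber, absorbance, lo, hi):
    """Integrate the ATR-FTIR absorbance between lo and hi (cm^-1), trapezoidal rule."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[mask], absorbance[mask]
    return float(abs(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0)))

def tooth_composition_ratios(wavenumber, absorbance):
    """Mineral-to-matrix (M/M) and carbonate-to-phosphate (C/P) ratios from band areas.
    The integration windows are assumed, typical literature values."""
    phosphate = band_area(wavenumber, absorbance, 900, 1200)   # v1 v3 PO4
    carbonate = band_area(wavenumber, absorbance, 850, 890)    # v2 CO3
    amide_i   = band_area(wavenumber, absorbance, 1600, 1700)  # amide I (C=O)
    return {"M/M": phosphate / amide_i, "C/P": carbonate / phosphate}

# Example: a hydroxyapatite-like reflection near 26 degrees 2-theta with 0.4 degree FWHM.
print(round(scherrer_crystal_size(fwhm_deg=0.4, two_theta_deg=26.0), 1), "nm")
```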
2.1. Analysis of Crystallographic Indices in Teeth by XRD to Determine PMI
The crystallographic indices of hydroxyapatite-(CaOH) found in the teeth were quantified by XRD and Rietveld refinement. Representative XRD spectra are shown in A. We observed significant overall differences in the different PMI groups. Changes over the PMI were observed in two crystallographic indices: crystallinity (F(3,36) = 5.120, p = 0.0047) and crystal size (F(3,36) = 4.356, p = 0.0102). The crystallinity index showed an increase with increasing PMI, displaying significant differences between 0 years versus 25 (p < 0.05) and 50 (p < 0.01) years of PMI ( B). The crystal size index showed an increase with increasing PMI, with differences between 0 years compared to the rest of the groups (10, 25 and 50 years of PMI) (p < 0.01) ( C). Analyzing crystallinity and crystal size by gender ( D) showed changes across PMI with significant overall differences in the different groups (F(3,32) = 4.819, p = 0.0070 and F(3,32) = 4.624, p = 0.0085, respectively). Although an interaction between gender and PMI was not found ( E), a specific increase in crystallinity index and crystal size was observed in the 50-year PMI group in females (p < 0.05).
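As a rough illustration of how an omnibus comparison of this kind can be computed, the sketch below runs a plain one-way ANOVA across the four PMI groups with SciPy, followed by simple pairwise t-tests against the 0-year baseline. It is only a minimal stand-in: the published analysis used ANCOVA with age as a covariate and Tukey/Dunnett post hoc tests (see Section 4.5), and all values below are placeholders rather than the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    0: rng.normal(0.60, 0.05, 10),   # placeholder crystallinity values, 0-year PMI (n = 10)
    10: rng.normal(0.63, 0.05, 10),
    25: rng.normal(0.66, 0.05, 10),
    50: rng.normal(0.68, 0.05, 10),
}

# Omnibus one-way ANOVA across the four groups (df = 3, 36)
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F(3,36) = {f_stat:.3f}, p = {p_value:.4f}")

# Simple pairwise follow-up of each PMI group against the 0-year baseline,
# standing in for the Tukey/Dunnett post hoc tests used in the paper.
for pmi in (10, 25, 50):
    t, p = stats.ttest_ind(groups[0], groups[pmi])
    print(f"0 vs {pmi} years: t = {t:.2f}, p = {p:.4f}")
```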
2.2. Analysis of Tooth Composition by ATR-FTIR to Determine PMI
The relative contents of tooth compounds containing amides, carbonates and phosphates were identified by ATR-FTIR in all teeth of the different PMI groups. A represents the mean values of the ATR-FTIR spectra from each PMI group. The relative content of tooth compounds was quantified to calculate relevant parameters related to tooth composition, such as mineral-to-matrix (M/M) ratio, carbonate-to-phosphate ratio (C/P ratio), mineral maturity and collagen maturity. An overall effect of PMI was found on the M/M ratio (W(3,17.99) = 11.75, p = 0.0002) and C/P ratio (W(3,16.64) = 7.71, p = 0.0019) ( B,C) but not on mineral maturity and collagen maturity ( D,E). The M/M ratio showed an increase with increasing PMI, exhibiting significant differences between 25 years versus 0 (p < 0.05) and 10 (p < 0.05) years of PMI, and between 50 years versus 0 (p < 0.01), 10 (p < 0.01) and 25 (p < 0.05) years of PMI ( B). The C/P ratio showed a decrease with increasing PMI, exhibiting significant differences between 25 years versus 0 (p < 0.05) and 10 (p < 0.05) years of PMI ( C). Analyzing the M/M ratio, C/P ratio, mineral maturity and collagen maturity by gender ( F–I) revealed an overall effect of PMI on the M/M ratio (F(3,32) = 14.83, p < 0.0001) ( F) but not on the C/P ratio, mineral maturity and collagen maturity ( G–I). Interestingly, an increase in the M/M ratio was found with increasing PMI, with significant differences between 50 years versus 0 (p < 0.05) and 10 (p < 0.05) years of PMI in males and between 50 years versus 0 (p < 0.01), 10 (p < 0.01) and 25 (p < 0.05) years of PMI in females ( F). In contrast, the C/P ratio showed a specific decrease with increasing PMI in females, with significant differences between 50 years versus 0 (p < 0.05) and 10 (p < 0.05) years of PMI ( G).
2.3. Correlation Analysis between PMI, Crystallographic Indices and Tooth Compounds
The relationship between the different variables in all samples was explored using the Pearson r correlation. The analysis revealed that PMI correlated positively with crystallinity (rho = 0.50, p = 0.001), crystal size (rho = 0.36, p = 0.021) and the M/M ratio (rho = 0.73, p < 0.001). However, a negative correlation was found between PMI and the C/P ratio (rho = −0.33, p = 0.039), C/P ratio and crystallinity (rho = −0.51, p = 0.001) and C/P ratio and M/M ratio (rho = −0.65, p < 0.001) ( A and ). A correlation analysis by gender revealed that PMI correlated positively with the M/M ratio (rho = 0.64, p = 0.002) in males ( B and ). The M/M ratio also correlated negatively with C/P ratio (rho = −0.49, p = 0.028) and mineral maturity (rho = −0.45, p = 0.0047) in males ( B and ). In females, PMI correlated positively with crystallinity (rho = 0.60, p = 0.005), crystal size (rho = 0.48, p = 0.034) and M/M ratio (rho = 0.84, p < 0.001) ( C and ). In contrast, the C/P ratio correlated negatively with PMI (rho = −0.76, p < 0.001), crystallinity (rho = −0.76, p < 0.001) and M/M ratio (rho = −0.91, p < 0.001) in females ( C and ).
2.4. Changes in Crystallographic Indices and Tooth Compound Parameters Explain PMI
After assessing the factor analysis and correlations, the selected model contained seven independent variables: the M/M ratio, C/P ratio, age of individuals, crystallinity, crystal size, mineral maturity and collagen maturity. The variances of the seven variables chosen were accounted for by three extracted components. According to this model, the three components (factors) together explained 70.6% of the variance associated with PMI (0, 10, 25 and 50 years) in human teeth ( and ). The most influential factor (component 1) explained 34.1% of total variance. The C/P ratio, M/M ratio, age of individuals and crystallinity had high factor loadings (−0.789, 0.875, 0.666 and 0.578, respectively). Component 2 explained 19.8% of total variance, with age of individuals, crystallinity and crystal size having high factor loadings (−0.401, 0.557 and 0.840, respectively). Component 3 explained 16.7% of total variance. Mineral maturity and collagen maturity had high factor loadings (0.797 and 0.735, respectively).
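For readers who want to reproduce this kind of decomposition, the sketch below extracts three varimax-rotated components with scikit-learn and keeps loadings of at least |0.4|. It is a hedged approximation: the paper's analysis was a principal component analysis with varimax rotation run in SPSS, which is related to but not identical with scikit-learn's FactorAnalysis, and the data frame here is filled with placeholder values rather than the measured variables.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

cols = ["MM_ratio", "CP_ratio", "age", "crystallinity",
        "crystal_size", "mineral_maturity", "collagen_maturity"]
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(40, len(cols))), columns=cols)  # placeholder data, 40 teeth

# Standardize, then extract three rotated components, mirroring the
# three-factor solution reported above.
Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(Z)

loadings = pd.DataFrame(fa.components_.T, index=cols,
                        columns=["Comp1", "Comp2", "Comp3"])
# Keep only loadings of at least |0.4| for interpretation, as in the paper.
print(loadings.where(loadings.abs() >= 0.4))
```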
2.5. Predicting PMI from Crystallographic Indices and Tooth Compound Parameters
After evaluating binary logistic regression, the results showed that crystallinity, crystal size, M/M ratio and C/P ratio had the strongest association with PMI. The predicting variables selected for the 10 years of PMI were crystallinity, crystal size, M/M ratio and C/P ratio ( A). The overall success rate (percentage of correct predictions) for 10 years of PMI was 87% (ROC-AUC = 0.96, 95% CI = 0.87–1.04, Sensitivity = 0.9 and 1−Specificity = 0.1). The predicting variables selected for 25 years of PMI were crystallinity and mineral maturity ( B). The overall success rate for 25 years of PMI was 76% (ROC-AUC = 0.90, 95% CI = 0.75–1.04, Sensitivity = 0.9 and 1−Specificity = 0.2). The predicting variables selected for 50 years of PMI were crystallinity and mineral maturity ( C). The overall success rate for 50 years of PMI was 80% (ROC-AUC = 0.92, 95% CI = 0.80–1.03, Sensitivity = 0.9 and 1−Specificity = 0.2). Analysis of the predictive probabilities for teeth with 10 years ( D), 25 years ( E) and 50 years ( F) of PMI indicated that the respective means were significantly different compared to teeth with 0 years of PMI (10 years: U = 4, p = 0.0001; 25 years: U = 10, p = 0.0015; 50 years: U = 3, p < 0.0001). More information can be found in .
This study addresses one of the main challenges in forensic science (accurately determining the PMI), carrying out the assessment in an uncommon sample type (the tooth) and combining two biochemical approaches, XRD and FTIR, for the first time. To the best of our knowledge, this is the first study that uses XRD and ATR-FTIR jointly to predict PMI in human teeth. The results show that the combination of XRD and ATR-FTIR analysis is well suited to estimating late PMI based on time-dependent component changes in human teeth.

A diagnostic test is considered "highly accurate" with an AUC value of >0.9, "useful for some purposes" with a value of 0.7–0.9 and "poor" with a value of 0.5–0.7 . Our results show that the parameters of crystallinity, crystal size, M/M ratio and C/P ratio can be considered highly accurate in determining a PMI of 10 years, with a lower 95% CI limit of AUC > 0.9 ( ). Applying the same strict statistical interpretation, crystallinity and mineral maturity can be considered useful in determining a PMI of 25 years, with a lower 95% CI limit of AUC = 0.9 ( ). Crystallinity and mineral maturity can be considered highly accurate in determining a PMI of 50 years, with a lower 95% CI limit of AUC > 0.9 ( ). This method can therefore be considered highly accurate in estimating a PMI of 10 and 50 years and useful in estimating a PMI of 25 years in forensic and anthropological cases.

In our results, the parameters of XRD and ATR-FTIR accurately discriminated between PMI times in the sample teeth, with crystallinity being the most useful owing to its applicability at all PMIs studied. There is a close association between the mineral and the organic matrix of mineralized samples, leading to a low degree of crystallinity of the hydroxyapatite . Molecular mechanisms during the fossilization process, such as a decrease in the organic matter, result in increased crystallinity . Therefore, it is to be expected that as a mineralized sample begins to lose organic matter during its degradation, the degree of crystallinity of the hydroxyapatite increases, showing a larger crystal size and, therefore, changes in XRD and FTIR peaks. In this process of loss of organic matter during postmortem degradation, molecular mechanisms play a key role in the relationship between collagen structure and crystallinity. This relationship will produce changes in the strength and fragility of the mineralized samples over time .

Our results clearly show that the crystallinity index, crystal size and M/M ratio increase significantly with increasing postmortem interval in the sample teeth, displaying results analogous to those of XRD applied to bone samples . This may be due to the fact that bone begins to lose organic matter while crystallinity increases during the postmortem degradation process, showing more ordered, larger crystals [ , , ]. Our results would appear to indicate that this same molecular mechanism could occur in the human tooth. In contrast, a decrease in the C/P ratio with increasing PMI was observed in human teeth, which differs from the results reported by other authors . The explanation for these differences may be that the composition of teeth is more heterogeneous than that of bone because of the enamel, dentin and cementum that compose them . In this sense, teeth and bone tissues are composed of the same elements (water, organic matter and a mineral phase) but in different proportions.
The mineral matrix of bone and teeth is mainly composed of hydroxyapatite crystals. These crystals differ in size and quantity for each mineralized tissue (bone and teeth). Moreover, the tooth contains enamel, which has a very small amount of organic matrix (>1% weight (wt)) . Therefore, the differences presented can be attributed to the proportional variation of the components of the two mineralized tissues and the enamel structure.

Age has implications for the characteristics of the teeth matrix . For example, the mineral content of dentin increases with age and, in turn, affects the organic content of dentin . In this regard, our results showed that age contributes to the PMI variance in Component 1 (closely related to tooth crystallography and composition) and Component 2 (closely related to tooth fragility and senescence). Regarding the effects of sex on PMI, we observed that an increase in PMI is associated with an increase in crystallinity and crystal size in females but not in males. The mean age of the females in our study is over 60 years, suggesting that hormonal changes after menopause could influence the results obtained for PMI estimation.

The assessment of taphonomic changes in a body is essential in a forensic anthropology analysis. Teeth can be affected by various taphonomic and diagenetic processes, which may modify the structure and composition of tooth biomaterials . In addition, several factors can affect the decomposition of a body by accelerating or suppressing the periods of putrefaction, such as the flora, fauna, type of burial, ambient temperature, soil characteristics, humidity, rainfall, age, sex, etc. . As an example, during the burial period, accumulation of carbonate may occur over time in mineralized tissues, depending on the soil conditions at the burial site . However, teeth have a unique composition and are more protected than bones from postmortem degradation, aging and external environmental factors due to their higher inorganic composition and their inclusion in the jawbone . Our study was performed on dental samples under in vitro and controlled conditions, so the environmental effects of real forensic cases could not be explored. This research is a preliminary investigation evaluating the joint application of XRD and FTIR to estimate PMI but, considering our results, it could become a promising alternative for the dating of human teeth.

Our study has strengths and limitations. The principal strength is its novelty: it is the first study to provide a highly accurate estimation method for PMI by combining XRD and ATR-FTIR on human teeth stored for 10, 25 and 50 years of PMI, all using one of the hardest tissues in the human body (the tooth), thus allowing application in severely decomposed bodies. Our study also has several limitations. The PMI was researched in a controlled laboratory environment, and no account was taken of influential factors, such as soil, temperature and other environmental conditions. In addition, the sample size was small, so further studies are needed for the results to be representative of the general population. Further studies are also needed to analyze the effect of other factors on PMI in teeth (e.g., gender, age, tooth type, healthy versus unhealthy teeth), since differences in chemical composition can occur within the same tooth or between different individuals.
This study was approved by the Research Ethics Committee of Málaga Province (approval reference: ODONTAGING-2021; approval date: 16 December 2021) and conducted in accordance with the Declaration of Helsinki and national data protection legislation. Informed consent was obtained from all subjects.
4.1. Samples
A total of 40 healthy human teeth (molars and premolars) were obtained from adult patients (20 females and 20 males) aged between 29 and 82 years (mean of 60 ± 11.54 years) in public and private dental clinics in Granada, Málaga and Cádiz (Spain). All the teeth studied were extracted for valid clinical reasons (periodontal disease or orthodontic treatment) and were free of caries, fillings and fractures ( ). After extraction, the teeth were washed with distilled water, and their external surfaces were cleaned with curettes to remove any extraneous material. The teeth were then stored under controlled conditions of 21 °C and 65% humidity for 0, 10, 25 and 50 years.
4.2. Sample Preparation
The teeth were pulverized in liquid nitrogen using a 6770 Freezer Mill (SPEX CertiPrep FreezerMill, Stanmore, London, UK). The resulting powder was collected and stored in a −80 °C freezer until XRD and ATR-FTIR analysis.
4.3. X-ray Powder Diffraction
The teeth were analyzed (~100 mg) using an Empyrean Malvern Panalytical automated X-ray diffractometer (Malvern Panalytical, Malvern, UK) and Rietveld refinement [ , , , , ]. The sample crystallinity and crystallite size patterns were collected with step size 0.017° (2θ) and 300 sec/step using Cu-Kα (λ = 1.540598 Å) radiation from a tube operated at an accelerating voltage of 45 kV and a current of 35 mA. The (002) peak was baselined from 4° to 80° (2θ) for 30 min and fitted with a Lorentzian curve to determine peak broadening as a function of its full width at half maximum. Identification of the amorphous phase and pure crystalline material was performed with reference to an external standard and the database provided by the International Center for Diffraction Data (Powder Diffraction File no. 84-1998), Inorganic Crystal Structure Database and Crystallography Open Database (COD no. 9010050; RRID:SCR_005874). Sample crystallinity (the degree of order in a solid) is defined as the ratio of the enthalpy difference between the pure amorphous phase and the sample enthalpy over the difference of the pure amorphous and the pure crystalline material (external standard). Crystallinity percentage is calculated with the following formula: (total area of crystalline peaks) × 100/(total area of crystalline and amorphous peaks). The Scherrer equation (Dv = K·λ/(β002·cosθ)) and Williamson–Hall method were used to calculate crystallite size (LVol-IB, nm), where Dv is the volume weighted crystallite size, K is the Scherrer constant with a value of 1, λ is the X-ray wavelength used, and β002 is the integral breadth of the (002) reflection or length of the apatite crystals along the C-axis. The R-Bragg factor, cell volume, crystal linear absorbance coefficient (1/cm) and crystal density (g/cm³) were also checked. Three patterns were performed, obtaining a mean pattern for each sample.
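The two headline indices can be computed directly from the quantities defined above. The following is a minimal sketch with placeholder peak areas and peak widths rather than instrument output; the constant K = 1 and the Cu-Kα wavelength follow the description in the text, while the example 2θ position and integral breadth are assumed values used purely for illustration.

```python
import math

def crystallinity_percent(crystalline_area: float, amorphous_area: float) -> float:
    """(total area of crystalline peaks) x 100 / (crystalline + amorphous peak areas)."""
    return 100.0 * crystalline_area / (crystalline_area + amorphous_area)

def scherrer_size_nm(wavelength_nm: float, beta_deg: float,
                     two_theta_deg: float, k: float = 1.0) -> float:
    """Scherrer equation Dv = K*lambda / (beta_002 * cos(theta)),
    with the integral breadth beta_002 converted from degrees to radians."""
    beta_rad = math.radians(beta_deg)
    theta_rad = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta_rad * math.cos(theta_rad))

# Placeholder numbers: Cu-Kalpha wavelength = 0.1540598 nm and an assumed
# apatite (002) reflection near 2-theta ~ 25.9 deg with an assumed breadth.
print(crystallinity_percent(crystalline_area=820.0, amorphous_area=310.0))            # ~72.6 %
print(scherrer_size_nm(wavelength_nm=0.1540598, beta_deg=0.35, two_theta_deg=25.9))   # ~26 nm
```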
4.4. ATR-FTIR Spectroscopy
Infrared (IR) analysis of each tooth (~100 mg) was carried out in a Bruker Vertex 70 Fourier Transform (FT)-IR spectrophotometer (Bruker Corporation, Billerica, MA, USA). Attenuated total reflectance (ATR) was used with a Golden Gate System of Individual Reflection [ , , , ]. The internal reflection element was ZnSe (20,000–500 cm⁻¹). For acquisition of the spectra, a standard spectral resolution of 4 cm⁻¹ in the spectral range of 4000–500 cm⁻¹ was used, along with 64 accumulations per sample. The background spectrum in all cases was the air. For analysis of raw spectra, the ν₁ν₃PO₄³⁻ bands were baselined from 1200 to 900 cm⁻¹, the ν₂CO₃²⁻ band from 890 to 850 cm⁻¹ and the amide I band from 1730 to 1585 cm⁻¹. Spectral analysis was performed in triplicate, and a mean spectrum was obtained for each sample. The position, height and area under the curves (baseline correction) were measured after curve-fitting every individual (not smoothing) spectrum. The following parameters reflecting the compositional properties of dental samples were calculated [ , , , , ]:
(1) mineral-to-organic matrix (M/M) ratio, an index of mineral content that characterizes the relative amount of phosphate per amount of collagen present, calculated by the ratio of the integrated areas of the respective raw peaks of ν₁ν₃PO₄³⁻ (1200–900 cm⁻¹) and amide I (1730–1585 cm⁻¹);
(2) carbonate-to-phosphate ratio or carbonate-to-mineral ratio (C/P ratio), an index of phosphate-to-carbonate-substituted apatites that characterizes the degree of carbonate substitutes in the mineral lattice, calculated by the ratio of the integrated areas of the respective raw peaks of ν₂CO₃²⁻ (890–850 cm⁻¹) and ν₁ν₃PO₄³⁻ (1200–900 cm⁻¹);
(3) mineral crystallinity or maturity (1030/1020 cm⁻¹ intensity ratio), a degree of order in a solid, which is related to the size and perfection of crystals; and
(4) collagen maturity (1660/1690 cm⁻¹ intensity ratio), an index related to the ratio of mature, non-reducible collagen crosslinks to immature, reducible collagen crosslinks.
The second derivatives of the raw data from the ATR-FTIR spectra were applied to determine specific peaks at ~1030, ~1020, ~1660 and ~1690 cm⁻¹ and improve the accuracy of the quantification of mineral maturity and collagen crosslink ratio.
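A minimal sketch of how these four parameters can be derived from a single baseline-corrected spectrum is given below, using the band limits and peak positions stated above. The spectrum itself is a synthetic placeholder, and the simple trapezoidal integration stands in for the curve-fitting procedure actually used.

```python
import numpy as np

def band_area(wavenumbers, absorbance, lo, hi):
    """Integrated area of a band between two wavenumber limits (cm-1)."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return np.trapz(absorbance[mask], wavenumbers[mask])

def intensity_at(wavenumbers, absorbance, target):
    """Absorbance at the data point closest to a target wavenumber."""
    return absorbance[np.argmin(np.abs(wavenumbers - target))]

def ftir_parameters(wavenumbers, absorbance):
    po4 = band_area(wavenumbers, absorbance, 900, 1200)       # v1v3 phosphate band
    amide_i = band_area(wavenumbers, absorbance, 1585, 1730)  # amide I band
    co3 = band_area(wavenumbers, absorbance, 850, 890)        # v2 carbonate band
    return {
        "M/M ratio": po4 / amide_i,
        "C/P ratio": co3 / po4,
        "mineral maturity": intensity_at(wavenumbers, absorbance, 1030)
                            / intensity_at(wavenumbers, absorbance, 1020),
        "collagen maturity": intensity_at(wavenumbers, absorbance, 1660)
                             / intensity_at(wavenumbers, absorbance, 1690),
    }

# Synthetic placeholder spectrum covering 500-4000 cm-1, for shape only;
# a real analysis would use the measured, baseline-corrected mean spectrum.
wn = np.linspace(500, 4000, 1750)
spec = (np.exp(-((wn - 1030) / 60) ** 2)
        + 0.15 * np.exp(-((wn - 1655) / 40) ** 2)
        + 0.05 * np.exp(-((wn - 870) / 15) ** 2))
print(ftir_parameters(wn, spec))
```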
4.5. Statistical Analysis
GraphPad Prism 9.0 and IBM SPSS Statistics 26.0 were used for statistical analysis. All data are represented as the mean ± standard error of the mean (SEM) of 10 determinations per experimental group ( n = 10). The normal (Gaussian) distribution of the variables was assessed by the Dallal–Wilkinson–Lilliefor corrected Kolmogorov–Smirnov test. Most variables met the assumption of Gaussian distribution ( p > 0.1), and parametric statistics were applied. Otherwise, statistical analyses were performed using non-parametric Mann–Whitney U-test for comparison between two groups. Bartlett’s test was performed to assess equal variances across groups. Firstly, the comparison of quantitative variables between PMI groups that assumed equal standard deviation (SD) were performed using an ANCOVA test, followed by Tukey’s test for multiple comparisons. Quantitative variables that did not assume equal SD were analyzed using Welch’s ANOVA test, followed by Dunnett’s T3 test ( n < 50/group) for multiple comparisons. Age was controlled, being entered as a covariate. Secondly, a two-way ANOVA test was carried out to assess the effects of PMI and gender as the main factors and the interaction between them, followed by Tukey’s test for multiple comparisons. Thirdly, a Pearson correlation test was performed to analyze the relationship between variables. Fourthly, principal component analysis with orthogonal (varimax) rotation between variables was undertaken to determine the components that account for PMI. Only variables with a factor loading of at least 0.4 (sharing at least 10% of the variance with a factor) were considered high enough for interpretation. Finally, the backward method for binary logistic regression and receiver operating characteristic (ROC) analysis was used to determine the predictability of each PMI. These steps were taken to obtain all possible combinations of the exploratory variables and to calculate the highest areas under the ROC curves (AUCs) and the overall success rates (percentage of correct predictions) of the resulting model with this combination (i.e., the one with the greatest discrimination power). A p -value below 0.05 was considered significant.
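As an illustration of the final modelling step, the sketch below fits a binary logistic regression separating one PMI group from the 0-year baseline and scores it with ROC-AUC and the overall success rate, using scikit-learn. The data are placeholders, the four predictors are those reported as most informative in the Results, and the backward variable-selection procedure performed in SPSS is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(2)
predictors = ["crystallinity", "crystal_size", "MM_ratio", "CP_ratio"]

# Placeholder data: 10 teeth in the 0-year baseline (label 0) and 10 teeth
# in one PMI group (label 1), with a crude shift so the example separates.
X = pd.DataFrame(rng.normal(size=(20, len(predictors))), columns=predictors)
X.iloc[10:, :] += 0.8
y = np.array([0] * 10 + [1] * 10)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]            # predicted probabilities per tooth
auc = roc_auc_score(y, prob)                   # area under the ROC curve
success = accuracy_score(y, model.predict(X))  # overall success rate
print(f"ROC-AUC = {auc:.2f}, correct predictions = {success:.0%}")
```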
The results show that the combination of ATR-FTIR analysis and XRD is well suited to estimating late PMI based on time-dependent component changes in human teeth. According to our results, PMI has a strong association with crystallinity, crystal size, M/M ratio and C/P ratio, and the crystallographic parameter that best predicts PMI is crystallinity. The molecular mechanisms underlying the changes observed in crystallinity are related to the loss of organic matter over the postmortem interval. In the overall analysis of our data, the combination of XRD and ATR-FTIR analyses could be a promising alternative for use in the dating of human teeth. These results may help better understand the molecular mechanisms of the degradation of human teeth and provide a basis for future practical research.
|
Update on the Molecular Pathology of Cutaneous Squamous Cell Carcinoma
|
ff1b4ed0-c7d7-4cb6-91c9-0b5e734748f8
|
10095059
|
Pathology[mh]
|
Squamous cell carcinoma (SCC) is one of the most common neoplasms (ranking second among all neoplasia), originating from keratinocytes in the spinous layer of keratinized stratified squamous epithelium . This origin makes possible the occurrence of SCC types in all organs and tissues that contain stratified squamous epithelia, such as the skin or the mucous membranes lining the hollow organs (digestive tract, oral cavity, and respiratory tract epithelium). Regarding cutaneous SCC (cSCC), it ranks second among non-melanocytic skin cancers, after basal cell carcinoma, with an increasing incidence in recent years in Europe, although the incidence rate is stable in the USA and Australia . Regarding risk factors, exposure to ultraviolet radiation (UVA and UVB) is the main cause of carcinogenesis in cSCC: it induces DNA alterations in tumor suppressor genes and proto-oncogenes, and the risk increases with cumulative exposure throughout life. Other incriminated risk factors are immunosuppression, chronic infections (especially with human papillomavirus, HPV), genetic changes in genes involved in DNA repair, chronic ulceration, and chronic inflammation . The complex molecular mechanisms involved in the occurrence of cSCC, as well as the high mutational burden, translate into a large number of precursor forms of cSCC (actinic keratosis, Bowen's disease, Queyrat's erythroplasia, and Bowenoid papulosis), as well as in situ or invasive cSCC (more than 15 different types reported in the literature from an anatomical–clinical point of view, including metatypical, verrucous, acantholytic, fusiform, pigmented, desmoplastic, mucoepidermoid, clear cell, signet ring cell, trichilemmal, inflammatory, lymphoepithelioma-like, basaloid, carcinosarcoma, papillary, and invasive Bowen's disease) . The World Health Organization has summarized these types into six forms: verrucous SCC, acantholytic SCC, lymphoepithelial SCC, clear cell SCC, spindle cell SCC, and SCC with sarcomatoid differentiation . This review aims to highlight the main molecular mechanisms involved in carcinogenesis, as well as the epigenetic aspects that can influence treatment and treatment resistance.

Most SCCs do not arise as de novo tumors, but in an incremental manner from premalignant or noninvasive precursor lesions . Actinic keratosis (AK) represents, clinically, the first detectable precursor lesion of cSCC. Most AKs either remain in the premalignant status or even regress spontaneously . A small subset of AKs acquire additional genetic and epigenetic changes and progress to cutaneous squamous cell carcinoma in situ (SCCIS) and further to cSCC. The risk of evolution from AK to SCC is very difficult to predict, with the numbers varying vastly between different studies (0.025–20%) . Of the cSCCs, only a small percentage acquire additional genetic and epigenetic features that lead to metastatic disease .

Ultraviolet (UV) exposure is considered a risk factor that initiates the mutagenic process in the skin, leading to modified keratinocytes that have a survival advantage over unmutated keratinocytes; this then leads to the selection of mutated keratinocytes over time. These mutated clones can acquire further genetic or epigenetic changes, leading to AKs, and further to SCCISs and cSCCs. The development of cSCC is a multistep process requiring the accumulation of multiple genetic and epigenetic alterations in keratinocytes.
These alterations lead to an augmented mutation rate by increasing cellular proliferation and reducing cell death in the mutated keratinocyte population. DNA mutations are caused by either exogenous factors, such as UV radiation, chemicals, and ionizing radiation, or endogenous factors, such as reactive oxygen species (ROS), genome editing, mitotic errors, or errors in DNA repair . Cumulative lifetime exposure to UV radiation is considered the most important carcinogenic factor responsible for cSCC . UV exposure over-activates the DNA repair systems of keratinocytes, leading to ATP consumption . UVB radiation can produce DNA damage through structural rearrangements due to its high photonic energy. If the damaged DNA strand is not repaired before replication, the complementary strand incorporates the altered base, fixing the mutation . This process leads to high rates of C > T transitions and CC > TT double base changes, thus generating a "UVB signature" . Multiple genes have been postulated to be involved in the development of AKs and SCCs, with several molecular pathways and mechanisms being involved .

In recent years, special attention has been given to epigenetics and its involvement in the occurrence of chronic diseases in general, and cancers in particular . Epigenetic changes include all the mechanisms through which changes occur in the expression of some genes, without interfering with the sequence of the nitrogenous bases that make up the respective genes . These are a result of the interactions between an organism and its environment, being represented by DNA methylation; histone modifications that influence the reading of certain DNA sequences; and miRNA-induced modifications, which can be transmitted from one cell to another within the same organism and even trans-generationally . In general, the DNA of tumor cells is epigenetically characterized by global hypomethylation, with areas of hypermethylation at the level of 5' cytosine-phosphate-guanine-3' (CpG) islands, which are generally located in the promoter regions of some key genes. The above changes lead to genomic instability, activation of oncogenes, alteration of promoters of tumor suppressor genes, as well as damage to numerous essential cellular pathways involved in DNA repair, apoptosis, cell growth, angiogenesis, etc. .

In skin cancers, the involvement of epigenetics in the pathophysiology and characterization of melanoma is already recognized. These epigenetic mechanisms are considered to represent some of the earliest events in the initiation of oncogenesis . However, the role of the interactions between the genome and the environment in the appearance and development of SCC, the second most common skin cancer, is less studied. The epigenetic profile seems to represent an important tool for characterizing the aggressiveness and metastatic potential of this type of skin cancer . Moreover, multiple changes, such as CpG hypermethylation, seem to be involved in its occurrence . The hypermethylation of certain CpG areas (induced especially by ultraviolet radiation, which increases the expression of DNA methyltransferase 1) leads to changes in some proteins with an important role in keratinocyte homeostasis, which is associated with aggressive behavior and metastasis .
Regarding post-translational modifications at the level of histones (through the processes of phosphorylation, acetylation, sumoylation, ubiquitylation, ADP-ribosylation, and glycosylation), these changes influence the way in which DNA sequences are exposed to reading, so that the transcription of some genes involved in keratinocyte differentiation is altered .

Besides DNA methylation, microRNA (miRNA) gene regulation is also present in the evolution of cSCC. Two types of miRNA are identified: those involved in the oncogenic process (which increase cell proliferation and invasion capacity, keratinocyte migration, the formation of new cell colonies, and the loss of apoptotic capacity), and those with tumor suppressor capacity (which act through opposite mechanisms). MiR-203 is one of the most important tumor suppressor microRNAs involved in the pathogenesis of cSCC (being expressed at high levels in the skin), acting by modulating the expression of the oncogene c-MYC (suppressing its activity) and inhibiting the angiogenesis and cell cycle of tumoral cells. Additionally, a decrease in MiR-203 is associated with a low degree of cSCC differentiation and a worse prognosis . Lohcharoenkal et al. have highlighted that MiR-130a also has tumor suppressor activity in cSCC by altering the bone morphogenetic protein (BMP)/SMAD pathway involved in tumor growth and invasion capacity. Thus, lower levels of MiR-130a have been found in cSCC samples compared to precancerous lesions or healthy skin . Another miRNA that plays an important role in suppressing the proliferation and invasion of tumoral cells is miR-27; the downregulation of this miRNA is associated not only with UVB irradiation of the skin, but also with cSCC development . MiRNAs 34a, 125b, 181a, 148a, 20a, 204, 199a, 124, and 214 are some of the investigated tumor suppressor miRNAs involved in tumor progression, cell proliferation and differentiation, angiogenesis, and cell migration by targeting the expression of essential genes involved in these pathways . Thus, lower expressions of these miRNAs are observed in cSCC compared with normal skin.

On the contrary, numerous miRNAs have been identified that promote tumoral cell initiation and progression, acting as proto-oncogenes. MiR-221 is a microRNA involved in numerous cancers (gastric cancer, ovarian cancer, breast cancer, etc.), with recent studies showing an upregulation of this small RNA fragment in cSCC, where it suppresses the phosphatase and tensin homolog (PTEN) gene, a tumor suppressor gene . Yin et al. identified another microRNA involved in cSCC, highlighting that MiR-21 is upregulated in cSCC tissues, where it contributes to the invasion and metastasis of cSCC by decreasing the activity of the tissue inhibitor of metalloproteinase 3 (TIMP3) gene. This gene is essential in modulating the activity of matrix metalloproteinases and molecules involved in angiogenesis, cell growth, and metastasis . MiR-186 influences the aggressive character of cSCC, with its upregulation leading to the inhibition of apoptotic protease-activating factor 1 (APAF1) . Additionally, some miRNAs can be identified as prognostic factors. For example, a study conducted by Canueto et al. associated the presence of MiR-205 with a poor prognosis, being expressed in tumors characterized by histological risk factors, such as desmoplasia, nerve invasion, or an infiltrative character .
MiR 365, 31, 142, and 135b were also found to be involved in the regulation of genes responsible for cell invasion, migration, resistance to apoptosis, and proliferation . Upregulation of MiR-664, 504, and 217 found in primary tumors seems to be associated also with the presence of an invasive behavior and a higher risk for metastatic disease. Gilespie et al. identified a group of miRNAs that are upregulated in tumors that metastasize compared to the primary one (miR-4286, miR-200a-3p, and miR-148-3p) and another group with aberrant expression in tumors with a high potential to metastasize (MiR-4286, MiR-421, MiR-4516, MiR-574-5p, MiR-135b, MiR-21, MiR-145, MiR-100, and MiR-214). Thus, these groups may be used in the future as markers of poor prognosis . Regarding the role of histone changes in cSCC initiation and progression, the literature data are poor in identifying specific histone methylation and acetylation changes in sCC, even though their role in other cancers is well known. In cSCC, Enhancer Of Zeste 2 Polycomb Repressive Complex 2 Subunit ( EZH2 ) (involved in histone methylation) seems to play a role in inhibiting the antitumoral immune response of the host, and it can be used in the future as an important target of specific antitumoral therapy . The understanding of cancer biology was revolutionized by the discovery of cancer stem cells (CSCs) by Bonnet and Dick, who described these cells in human acute myeloid leukemia . Nowadays, the molecular events happening in the microenvironment, involving also stromal cells of non-tumoral nature, and the tumoral–epithelial cell interactions are just beginning to be deciphered, but they seem to play a significant role in understanding tumor progression and resistance to therapy. In a tumor, it seems that not all cells are equal, but only a small proportion possesses the capacity of self-renewal and hierarchical differentiation, and these are the CSCs . There are several cell types that are considered as contributing to the heterogeneity: tumor cells, non-stem cancer cells, CSCs, cancer-associated fibroblasts (CAFs), endothelial cells, pericytes, tumor-associated macrophages (TAMs), mesenchymal stem cells (MSCs), and MSC-derived cells . Additionally, one must keep in mind that tissue stem cells, cells of origin (tumor-initiating cells), and CSCs are distinct concepts . Tumor-initiating cells are those subjected to the initial mutation and will develop to form the tumor detected by clinical means, while CSCs are cells responsible for the response to clinical treatments, drug resistance, tumor relapse, and propagation at distance (metastasis) . CSCs are considered to be the basis of any tumor development and responsible for intratumoral genetic and phenotypic heterogeneity . Like normal stem cells, CSCs reside in a specific microenvironment similar to stem cell niche, called a CSC niche . Moreover, CSCs are also responsible for repeating the phenotypic features of primary tumors in secondary tumors and for drug resistance . For example, in SCC, there are multiple population of CSCs with different phenotypes, including those responsible for tumorigenesis, proliferation, and tumor growth, and others responsible for the epithelial-mesenchymal transition and metastatic processes . Moreover, there are subpopulations of CSCs that remain in a dormant state, which makes them difficult to be targeted by drugs that affect the cell cycle, and are at the origin of drug resistance and relapse after chemotherapy . 
It is well known that cancer cells can display plasticity under the influence of microenvironmental factors (stromal cells, extracellular matrix molecules, and systemic and local growth factors), becoming CSCs that are able to transdifferentiate and dedifferentiate . In view of recent discoveries, plasticity can be subdivided into extrinsic plasticity, determined by changes in the microenvironment, and intrinsic plasticity, induced by specific transcription factors .

In fact, based on current data, there are three general models describing tumor heterogeneity: (1) the clonal evolution (CE) model; (2) the cancer stem cell (CSC) model; and (3) the plasticity model (for details, see review ). The CE model is based on the theory of Darwinian evolution in which a single cell undergoes mutations that are then transmitted through division to the daughter cells. The most adaptable daughter cells will survive and the others will disappear as a result of natural selection . The CSC model is based on the existence of a small group of cells inside tumors that have stem cell traits and the potential to proliferate hierarchically, thus being responsible for the induction, propagation, and metastasis of tumors . The plasticity model is based on the ability of CSCs and non-CSCs to shift states among each other . Although the roles of the CE model and the plasticity model have not yet been demonstrated in the case of cSCC, the role of CSCs in the initiation and perpetuation of non-melanocytic skin cancers (especially SCC) is well known; the number of CSCs in a tumor varies between 1 and 20%, with the percentage increasing with the aggressiveness of SCCs. Moreover, the types of CSCs involved in the appearance of SCCs can be accurately identified by highlighting specific cellular markers on the surface of these cells, such as CD34, CD200, and CD44 .

Regarding cancer cell plasticity, epigenetic changes seem to be involved in the regulation of the tumor microenvironment, which influences the continuous transition of tumor cells from stem to non-stem cancer cells, from an active to a quiescent state, and from an epithelial to a mesenchymal status. This behavior is well known in the case of epithelial tumors, but it is insufficiently investigated in the specific case of cSCC . A number of intrinsic and extrinsic factors are responsible for regulating stemness in CSCs, as described in the following subsections.

5.1. Surgery
Regarding treatment, the main objective is complete removal of the tumor, along with maximum preservation of the healthy surrounding tissues and good cosmetic results. Classic early surgical excision is the treatment of choice for localized stages, with a cure rate of >90% at five years . According to the EDF-EADO-EORTC group, the limits of surgical resection are 5 mm margins for low-risk tumors, extended up to 10 mm for high-risk tumors . Mohs microsurgery with margin control may be an option in patients with high risk and/or with special anatomical locations, given the increased curability associated with minimal recurrence rates, maximum tissue preservation, and good esthetic results . Some 4–5% of patients with SCC progress to more advanced stages, either locally advanced or metastatic disease (<5%) with locoregional or distant metastases; these stages require other therapeutic approaches, such as chemotherapy, radiotherapy or, more recently, immunotherapy.
The low incidence of metastatic forms makes them a therapeutic challenge; the management of these patients must be based on the medical decisions of a multidisciplinary team of dermatologists, surgeons, radiotherapists, and oncologists . The staging of cSCC is performed according to the criteria established by the AJCC 8th edition Staging Manual (American Joint Committee on Cancer, 2017) and the UICC 8th edition (Union for International Cancer Control, 2017). Risk stratification is carried out according to tumor- or patient-related characteristics. According to the EADO guide for the diagnosis and treatment of cSCC, low-risk tumors are pT1 tumors (tumor <2 cm in its greatest dimension according to AJCC8) or tumors that do not present the risk factors established by the EADO. High-risk tumors are those with at least a pT2 stage (tumor larger than 2 cm, AJCC8) or those that are associated with the EADO risk factors. However, the exact impact of each risk factor on recurrence is not known .

Current treatment guidelines (the AJCC 8th edition classification, the BWH classification of the Brigham and Women's Hospital, the NCCN Guidelines, and the EADO guidelines) attempt to systematize these risk factors in order to classify patients' stage of disease, with subsequent impact on the choice of treatment. The risk factors related to patients are immunosuppression, appearance of carcinoma in a radio-treated area or in an area with chronic inflammation, and symptoms indicating perineural invasion. The risk factors related to the tumor are diameter greater than 2 cm, location of the tumor in a high-risk area, imprecisely delimited edges, rapid tumor growth, and recurrence. Radiological risk factors include bone invasion and perineural invasion. Histological risk factors include tumor thickness >6 mm, poor differentiation, high-risk histological subtypes, perineural invasion, lymphatic/vascular invasion, and subcutaneous tissue invasion .

The main role of these classification systems is to choose an appropriate management for each patient with cSCC. Thus, cSCC is divided into primary cSCC and metastatic cSCC. Primary cSCC can be low-risk, in which the treatment of choice is excision with 5 mm margins, or high-risk, in which the curative solution is excision with 6–10 mm oncological margins or Mohs micrographic surgery. Locally advanced primary cSCC and metastatic cSCC (metastases in transit, nodal metastases, or distant metastases) require a multidisciplinary and individualized approach for each patient because negative margins cannot be obtained through the surgical approach. In addition to classical therapies, such as radiotherapy, electrochemotherapy, adjuvant radiotherapy, and chemotherapy, the understanding of tumor molecular mechanisms has allowed new classes of drugs to be introduced, such as immunotherapy and growth factor inhibitors, as well as combined treatments .

5.2. Radiation Therapy
Radiation therapy can be used as a therapeutic option for in situ SCC in patients over 60 years of age, with multiple lesions located on the lips, or in those refusing surgery, but it has a higher risk of recurrence than classic excision. It can also be an adjuvant therapy in patients with more advanced stages. For locally advanced SCC, radiotherapy can be used in case of perineural invasion or as an adjuvant method in case of positive post-excision margins.
Side effects include mucositis/dermatitis; telangiectasia; hypodermic sclerosis; necrosis of the soft tissue, cartilage, and bone; decreased sensitivity; and skin carcinomas. However, the higher risk of recurrence compared to complete surgical excision should be considered .

5.3. Systemic Therapy
Systemic therapy is a therapeutic option in patients with locally advanced SCC and/or metastases despite previous therapies. In 2020, the European interdisciplinary guidelines (EADO, EDF, and EORTC) defined a number of high-risk prognostic factors for cSCC recurrence, such as clinical features (location, symptomatic perineural invasion, and tumor size), histological features (poor differentiation, desmoplasia, thickness, and perineural invasion), immunosuppression, and radiological features (bone erosion and radiological PNI), leading to the need for therapeutic protocols in these patients .

5.3.1. Chemotherapy
In advanced cSCC, systemic therapies with cytotoxic agents have been used off-label: cisplatin/carboplatin, 5-fluorouracil, bleomycin, methotrexate, taxanes, and gemcitabine; polychemotherapy has proven more effective than monotherapy, but it is associated with more severe adverse reactions. More than three decades ago, the therapies used were isotretinoin, interferon, and cytotoxic agents, which showed efficacy on cSCC but had a limited effect on metastases . Prior to the era of targeted therapy, platinum-based chemotherapies were the first line of treatment, but they were burdened by high toxicity and an increased risk of recurrence of the disease under treatment .

5.3.2. Targeted Therapy
Due to recent progress made in the molecular biology of tumors, new targeted systemic therapies have been discovered to increase survival in advanced stages. cSCC is characterized by a high tumor mutational load with antigen formation, which can be targeted by the immune system. The role of the immune system in the pathogenesis of cSCC has been studied by observing the increased rate of cSCC in transplant patients or the rapid involution of keratoacanthomas as a result of an active immune response . Immunomodulators can be used in the treatment of cSCC due to the ability of the immune system to control the carcinogenesis process. The pathogenesis of cSCC is based on keratinocyte mutation with subsequent tumor clonal expansion under the action of exogenous and endogenous factors, such as immune suppression. In this context, the important cellular feature is self-tolerance mediated by surface expression of receptors and molecules known as immune checkpoints . In cSCC, there is an excessive expression of these molecules, especially programmed cell death protein 1 (PD-1), programmed cell death ligand 1 (PD-L1), cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4), and epidermal growth factor receptor (EGFR), which are molecules that can be therapeutically targeted . In addition, there are numerous mechanisms of escape from immune surveillance through various cytokines, including increased secretion of IL-6, IL-10, and TGF-beta; decreased secretion of IL-2; and inhibition of the proliferation of CD4+ and CD8+ T lymphocytes with a role in the recognition of tumor antigens. At present, anti-programmed cell death-1 (anti-PD-1) antibodies are the first line of treatment for advanced metastatic/local cSCC that cannot be cured by local surgery or radiation .
After their activation, T lymphocytes express PD-1 molecules on their surface; these play a role in the apoptosis of effector T cells and in the inhibition of Treg cell apoptosis upon binding to PD-L1 and PD-L2 on tumor cells. Tumor cells may overexpress PD-L1, thereby escaping immune surveillance, which is associated with metastatic and recurrent cSCC and T cell exhaustion following chronic exposure to tumor antigens . Co-inhibitory molecules play an important role in preventing hyperstimulation and autoimmunity. Programmed cell death-1 acts as a co-inhibitory receptor on T cells: by binding the PD-L1 ligand expressed on tumor cells, it prevents T cell activation and leads to immunological exhaustion. This process is called immunosurveillance . PD-L1 is expressed in 3–50% of cSCC, correlating with an increased risk of metastases . The data from the literature show that PD-L1 expression is associated with high-risk SCC: infiltrative patterns, immunosuppression, and perineural invasion . Currently, given that 50% of tumors do not respond to immunotherapy, attention is focused on identifying predictive factors for response, including tumoral genes and biomarkers in the peripheral blood. Thus, the tumor markers used may be PD-L1 status, IFN-gamma expression, and tumor-infiltrating lymphocytes (TILs). Liquid biopsy markers can be the immunophenotypic profile, cytokines and chemokines (IL-6), and soluble markers (sCTLA4 and sPD-L1) .

Anti-PD-1 Agents
Cemiplimab
Cemiplimab is the first systemic therapy evaluated in prospective studies in patients with advanced cSCC. Approved for use in advanced cSCC therapy among patients who are not candidates for surgery or radiation therapy in September 2018 by the Food and Drug Administration (FDA) and in July 2019 by the European Medicines Agency (EMA), Cemiplimab is a humanized IgG4 monoclonal antibody with affinity for PD-1. At a dose of 350 mg IV every three weeks, it blocks the interaction of PD-1 with PD-L1 at the tumor level, thus restoring T-cell activity and the antitumor response . The data from the literature demonstrate efficacy (response rates of up to 46.1%) and sustained response while maintaining disease control in approximately 72% of patients with advanced SCC treated with Cemiplimab . The efficacy of Cemiplimab has been tested in several phase I and phase II studies. In a phase I study with patients with locally advanced or metastatic disease led by M.R. Migden et al. (2018), a response rate of 50% was obtained, while in the cohort with metastatic disease (phase 2 study), a response rate of 47% was obtained. Of these patients, 7% had a response lasting more than six months, and side effects were reported in 15% of them . Cemiplimab has a good safety profile. The most common side effects reported are diarrhea, fatigue, constipation, and rash, which can be resolved by adjusting treatment doses or sometimes stopping treatment, but the therapeutic benefits and long-term response outweigh the risks of side effects . Cemiplimab has recently been included in clinical trials as adjuvant, neoadjuvant, or combined adjuvant/neoadjuvant therapy in patients with resectable or partially resectable SCC .

Nivolumab
Nivolumab is a PD-1 inhibitor approved by the FDA in November 2016 for the treatment of head and neck SCC, following a study that enrolled 361 patients receiving Nivolumab at a dosage of 3 mg/kg every two weeks, with improved overall survival .

Pembrolizumab
There are ongoing studies on the effectiveness of Pembrolizumab in cSCC.
The interim results of the Keynote 629 study, with a mean follow-up of 9.5 months, in which Pembrolizumab 200 mg every 3 weeks was used, showed a 32% response rate in 91 patients using it as the second line of treatment and a 50% response rate in 14 treatment-naive patients, but the average duration of response was unknown . Anti-CTLA-4 Antibody Regarding Ipilimumab, Day et al. reported a case of metastatic cSCC in a melanoma patient who received Ipilimumab every three weeks and completed three cycles. The patient responded to the therapy with decreasing cSCC metastases after three cycles of treatment, obtaining a partial response without significant adverse drug reactions . Anti CTLA-4 Antibodies Combined with PD-1 Antibodies Therapies can be successfully combined but at the cost of increasing side effects. Miller et al. reported the case of a 68-year-old patient who developed metastatic cSCC three years after kidney transplantation, for which he received combination therapy with Ipilimumab and nivolumab with a therapeutic response. The patient soon developed kidney failure, the transplant was removed, and he died a few months later due to cardiopulmonary arrest, which could not be attributed to the therapy . Epidermal Growth Factor Receptor Inhibitors (iEGFR) EGFR belongs to the human epidermal growth factor receptor (HER) family of proteins; its activation triggers multiple signaling pathways, including mitogen-activated protein kinase/extracellular signal-regulated kinase 1/2 (MAPK/ERK) and phosphatidylinositol-3-kinase/protein kinase B/mammalian target of rapamycin (PI3K/AKT/mTOR), and plays a role in maturation, proliferation, and inhibition of apoptosis even at the tumor level, leading to tumor growth. Regarding skin cancers, EGFR mutations have a low incidence of 2.5–5%, but are associated with the risk of metastases and, therefore, with a worse prognosis, thus becoming a therapeutic target. This has led to the development of anti-EGFR agents: the monoclonal antibodies cetuximab and panitumumab, which competitively inhibit EGF receptors, and small molecules that target the intracellular domain, including gefitinib and erlotinib . Cetuximab Cetuximab is a chimeric immunoglobulin that binds to the extracellular domain 3 of EGFR. Thus, it elicits adaptive and innate immune responses by downregulating immunosuppressive mechanisms and decreasing IFN-gamma-induced PD-L1 (programmed death ligand 1) expression. By modulating the PD-1 axis, treatment with Cetuximab may lead to a decrease in the therapeutic effect of immunotherapies (PD-1 inhibitors) in patients with recurrent cSCC . Cetuximab has been included in various studies to show its effectiveness. In a phase 3 study in France that included 36 patients with metastatic or advanced cSCC, the locoregional response to the treatment was 28%, with a mean duration of response of seven months. Another phase 2 study in which cetuximab was used as a monotherapy in the treatment of unresectable cSCC found stabilization of the disease in 58% of cases . Commonly reported side effects were infections, tumor bleeding, infusion-related reactions, and interstitial pneumonia. However, more studies are needed to examine the effectiveness of anti-EGFR agents and the possibility of combining them with other therapies . 5.4. Novel Approaches 5.4.1. Radiotherapy Associated with Immunotherapy A new approach in cSCC management is the combination of radiation therapy and immunotherapy.
Radiation damages both the tumor and the surrounding normal tissues, and this damage stimulates the immune system. Thus, irradiation induces MHC-1 expression in tumor cells, triggering the recruitment of effector immune cells, some with specific antitumor responses, which act synergistically with checkpoint inhibitors . 5.4.2. Oncolytic Viruses Oncolytic viruses targeting tumor cells create a less immune-tolerant tumor microenvironment and trigger subsequent cytokine expression, which acts synergistically with checkpoint inhibitors by increasing tumor CD8+ and interferon gamma (IFN-gamma) signaling and up-regulating PD-L1 in the tumor microenvironment. Such compounds include RP1 (Replimune-1), a modified herpes simplex virus 1, which can induce tumor regression by expressing GALV-GP R-protein (glycoprotein of gibbon ape leukemia virus) and GM-CSF (granulocyte/macrophage colony-stimulating factor) when used alone or in combination with nivolumab. Another oncolytic virus is Talimogene laherparepvec (TVEC), a non-neurovirulent herpes simplex virus capable of expressing GM-CSF . 5.5. Transplant Recipients Transplant patients, due to prolonged immunosuppression, have a higher risk of cSCC with a much more aggressive character and a much higher risk of metastases. In these patients, a benefit of the switch from immunosuppressive therapy to sirolimus was observed, with minimal effects on the graft and no negative effects on patient survival. In addition, immunotherapies should be used with caution in this setting, because anti-PD1 agents can cause irreversible allograft rejection, whereas anti-CTLA-4 agents have been shown to be better tolerated . Regarding treatment, the main objective is a complete removal of the tumor, along with the maximum preservation of healthy surrounding tissues and good cosmetic results. Classic early surgical excision is the treatment of choice for localized stages, with a cure rate of >90% at five years . According to the EDF-EADO-EORTC group, the limits of surgical resection are 5 mm margins for low-risk tumors, extended up to 10 mm for high-risk tumors . Mohs microsurgery with margin control may be an option in patients with high risk and/or with special anatomical locations, given the increased curability associated with minimal recurrence rates, maximum tissue preservation, and good esthetic results . Approximately 4–5% of patients with SCC progress to more advanced stages: locally advanced disease or metastatic disease (<5%) with locoregional or distant metastases; these stages require other therapeutic approaches, such as chemotherapy, radiotherapy, or, more recently, immunotherapy. The low incidence of metastatic forms makes them a therapeutic challenge; the management of these patients must be based on the medical decisions of a multidisciplinary team of dermatologists, surgeons, radiotherapists, and oncologists . The staging of cSCC is performed according to the criteria established by the AJCC 8th edition Staging Manual (American Joint Committee on Cancer, 2017) and the UICC 8th edition (Union for International Cancer Control, 2017). Risk stratification is carried out according to tumor- and patient-related characteristics. According to the EADO guide for the diagnosis and treatment of cSCC, low-risk tumors are pT1 tumors (tumor <2 cm in its greatest dimension according to AJCC8) or tumors that do not present the risk factors established by the EADO.
High-risk tumors are those with at least a pT2 stage (tumor larger than 2 cm) (AJCC8) or those that are associated with the EADO risk factors. However, the exact impact of each risk factor on recurrence is not known . Current treatment guidelines (AJCC 8th ed. classification, BWH classification of the Brigham Women’s Hospital, NCCN Guidelines, and EADO guidelines) attempt to systematize these risk factors in order to be able to classify patients’ stage of disease, with subsequent impact on the choice of treatment. The risk factors related to patients are immunosuppression, appearance of carcinoma in a radio-treated area or with chronic inflammation, and symptoms indicating perineural invasion. The risk factors related to tumor are diameter greater than 2 cm, location of the tumor in a high-risk area, imprecise delimited edges, rapid tumor growth, and recurrence. Radiological risk factors include bone invasion and perineural invasion. Histological risk factors include tumor thickness >6 mm, poor differentiation, high-risk histological subtypes, perineural invasion, lymphatic/vascular invasion, and subcutaneous tissue invasion . The main role of these classification systems is to choose an appropriate management for each patient with cSCC. Thus, cSCC is divided into primary cSCC and metastatic cSCC. Primary cSCC can be primary low-risk in which the treatment of choice is excision with 5 mm margins or primary high-risk in which the curative solution is excision with 6–10 mm oncological margins or Mohs micrographic surgery. Locally advanced primary cSCC and metastatic cSCC (metastases in transit, nodal metastases, or distant metastases) require a multidisciplinary and individualized approach for each patient because negative margins cannot be obtained through the surgical approach. In addition to classical therapies, such as radiotherapy, electrochemotherapy, adjuvant radiotherapy, and chemotherapy, with the understanding of tumor molecular mechanisms, new classes of drugs have been introduced, such as immunotherapy with growth factor inhibitors or combined treatments . Radiation therapy can be used as a therapeutic option for in situ SCC in patients over 60 years of age, with multiple lesions located on the lips, or those refusing therapy, but it has a higher risk of recurrence than classic excision. It can also be an adjuvant therapy in patients with more advanced stages. For locally advanced SCC, radiotherapy can be used in case of perineural invasion or as an adjuvant method in case of positive post-excision margins.
Advanced squamous cell carcinomas are a challenge for clinicians, and even with a multidisciplinary approach, they remain difficult to treat. Often, advanced tumors do not respond to classic treatment options, so new approaches are needed, but equally important are the prevention and detection of early stage tumors that can lead to an excellent prognosis. Most new therapies are still in clinical trials or need to be approved, but their therapeutic benefit is certain. Thus, a better understanding of the molecular, genetic, and epigenetic mechanisms behind the behavior of tumor cells is the key to new targeted therapies with minimal side effects for patients.
Carbon Nanomaterials: Emerging Roles in Immuno-Oncology
Traditionally, the major treatment modalities for cancer patients have been surgery, chemotherapy, and radiation . With a better knowledge of the relationship between oncology and immunology, it is now possible to use patients’ immune systems to fight cancer . Systemic toxicity, cancer recurrence, and metastasis, on the other hand, have an impact on patients’ prognoses. Fortunately, recent advances in immuno-oncology have highlighted that prospective treatment strategies should address this unmet need to prevent aggressive cancer relapses . Cancer immunotherapies that can induce immunological memory have demonstrated a lasting inhibitory effect on cancers’ growth, recurrence, and metastasis . Immune checkpoint blockade (ICB) and chimeric antigen receptor T (CAR-T) cell treatment are examples of cancer immunotherapies that have increased overall survival in a subgroup of patients, particularly in those with hematological tumors. However, only a subset of patients and/or certain cancer types respond favorably to immunotherapy, mainly owing to the immunosuppressive milieu of the solid tumor and immune resistance to mono-therapeutics . Moreover, the systemic delivery of immunotherapeutic medicines might result in severe autoimmune toxicities. To boost the activity of the immune response, innovative drug delivery techniques with improved targeting and tumor microenvironment (TME)-modifying capabilities are critically needed for cancer immunotherapy. Nanomaterials have gained much attention as prospective cancer therapy options because they can integrate multifunctional components (such as immunostimulants and chemotherapeutic medicines) and exhibit distinctive physicochemical features . All nanomaterials made of carbon atoms are referred to as carbon-based or carbon nanomaterials (CNMs), which have received a tremendous amount of attention in recent years. In this review, we provide a mechanism-based summary of CNMs in the antitumor immune response, and highlight the benefits and limitations of CNMs for improving the immunomodulatory effect of current cancer therapy.
Typically, based on their dimensional and geometrical structure, CNMs can be classified into four categories: 0D (zero-dimensional) CNMs (fullerenes, particulate diamonds, and carbon dots), 1D (one-dimensional) CNMs (carbon nanotubes, carbon nanofibers, and diamond nanorods), 2D (two-dimensional) CNMs (graphene, graphite sheets, and diamond nanoplatelets), and 3D (three-dimensional) CNMs (nanostructured diamond-like carbon films, nanocrystalline diamond films, and fullerite) . Carbon nanostructures can be tube-shaped (single-walled nanotubes (SWCNTs) and multiwalled nanotubes (MWCNTs)), horn-shaped (nanohorns), or spheres or ellipsoids (fullerenes). Fullerenes are carbon molecules or molecular forms of carbon, whereas graphene is a single sheet of carbon atoms . Nevertheless, carbon nanomaterials have been successfully manipulated to generate nanoscale carbon particles (carbon dots) and graphene-based materials known as graphene quantum dots (GQDs) capable of biological uses . Furthermore, depending on their carbon hybridization, CNMs can exhibit a wide range of crystallinity, including various proportions of sp2 and sp3 carbon bonds. CNMs are very flexible due to their unique characteristics, which allow them to form alternative covalent or noncovalent bonds with other carbon atoms or elements to diversify their functionalization . The categorization and basic structural features of CNMs based on their dimensions can thus be summarized according to these four classes.
The functionalization of CNMs is a popular method for tuning the hydrophilicity of carbon nanostructures while also imparting biocompatible properties. This process involves grafting functional groups onto the surface of CNMs, resulting in the development of stable structures. Notably, biomedical applications such as immuno-oncology need total biocompatibility in order to avoid undesirable immune system responses . The functionalization of CNMs has the potential to change their physical and chemical characteristics, as well as increase their therapeutic efficacy and bioactivity, reduce the immune response, and enable targeted drug delivery . This implies that the ease of chemically modifying CNMs provides another layer of capacity to create new systems that can be adapted for specific interventions in immuno-oncology. CNTs, graphene, CDs, and fullerenes have been reported to potentially increase diagnostic accuracy for tumors and infectious diseases. Moreover, most nanoparticles can deliver medications to tumor cells either passively (through selectively enhanced permeability and retention of the tumor’s vasculature) or actively (by endocytic pathways). Molecules capped with different ligands bind to cell receptors and enter the cells via endocytosis, delivering a larger concentration of the drug to the interior of a cancer cell while causing less cytotoxicity to normal cells . For instance, the anticancer drug doxorubicin (DOX) and the magnetic resonance imaging (MRI) contrast agent gadolinium-diethylenetriamine penta-acetic acid (Gd-DTPA) can be loaded into an asparagine-glycine-arginine (NGR) peptide-modified SWCNT system to enter and accumulate within tumor cells, allowing chemotherapy and tumor diagnosis to be combined in one system . Similarly, a photo-theranostic agent based on sinoporphyrin sodium (DVDMS)-loaded PEGylated graphene oxide (GO-PEG-DVDMS) was developed. This GO-PEG vehicle greatly boosted the efficiency of DVDMS in accumulating in tumors and the effectiveness of photodynamic therapy (PDT) in U87MG human glioma tumor cells in vivo . Notably, Moon et al. demonstrated the in vivo destruction of solid malignant tumors using polyethylene glycol-coated single-walled carbon nanotubes (PEG-SWCNTs) coupled with near-infrared (NIR) irradiation. The photothermal impact of PEG-SWCNTs was investigated in nude mice with human epidermoid oral carcinoma KB tumor cells. Tumors were completely destroyed in the mice treated with PEG-SWCNTs followed by NIR irradiation . Intriguingly, functionalized CNMs may be beneficial not only for solid tumors but also for hematological malignancies . For example, polyethylene glycol-coated discrete MWCNTs (PEG-dMWCNTs) were designed with strong binding of DOX and targeting molecules (alendronate) in mice with Burkitt’s lymphoma, which decreased the cancer burden and enhanced survival. PEG-dMWCNTs therefore offered a potential novel nanocarrier platform for the safe delivery of drugs for hematological malignancies . These examples demonstrated that CNMs are crucial in the field of cancer theranostics because they provide several benefits such as enhanced detection, tumor-specific drug delivery, and fewer lethal effects on normal tissues during cancer treatment . Furthermore, CNMs are inexpensive, stable, and biodegradable, and have good photothermal conversion in the NIR range, making them promising candidates for photoacoustic imaging and photothermal therapy (PTT) .
The photothermal heat can stimulate dying tumor cells to release antigens, pro-inflammatory cytokines, and immunogenic intracellular substrates, promoting immune activation during immunogenic cell death (ICD). In one study, dendritic cells (DCs) collected the released damage-associated molecular patterns (DAMPs) and tumor-associated antigens (TAAs), then processed and presented them to adaptive immune cells to trigger antitumor immune responses . Therefore, a deep understanding of the immunomodulatory activities of CNMs can help develop more effective therapeutics by harnessing the immunoregulatory effects of CNMs to achieve durable efficacy following cancer treatments.
The majority of cancer immunotherapies focus on the administration of tumor-associated antigens (TAAs) and tumor-specific antigens (TSAs). When the encoded antigen is translated to proteins in the cytoplasm of antigen-presenting cells (APCs), it can trigger an antigen-specific immune response. APCs process these proteins and display them on major histocompatibility complex (MHC) Class I (MHC I) to CD8+ T lymphocytes, promoting cell-mediated immune responses. Additionally, the MHC II trafficking signals produced from lysosomal proteins can also induce a supportive CD4+ T helper cell response if fused with an mRNA-encoded antigen, which is important in cancer immunotherapy . This implies that combining nanoparticles with adjuvants may boost the activation of the immune response against cancer if effectively delivered to the target cells . Nanomaterial-based delivery techniques have previously provided a suitable solution to cancer immunotherapy’s essential challenges . The primary hurdles for cancer immunotherapies can be ascribed to the lack of delivery mechanisms that can keep therapeutic payloads accessible to their targets . However, due to their extensive tunable functional groups and drug-carrying abilities, nanomaterials can enable tailored drug delivery to tumor locations or immunological organs. By reacting to internal or external stimuli, they can perform specific functionalities such as drug integration, effective biological barrier penetration, accurate administration of immunomodulators, and regulated release to enable effective tumor immunotherapy . Nonetheless, despite biomedical nanotechnology’s substantial contribution to health care management, major efforts are being made to solve difficulties such as their poor repeatability, specificity, effectiveness, and cost. In addition, the drug nanocarriers used should be biocompatible and stimulus-responsive in order to execute regulated drug delivery and discharge, including in the brain . As a result, various classes of nanomaterials have been created to address these inefficiencies. Lipid-based nanoparticles, CNMs, polymer-based nanomaterials, and metal-based nanomaterials are some examples of nanomaterial groupings. Because several lipid-based nanoparticles have previously been chosen for clinical studies, it is worthwhile to compare them with CNMs in terms of immuno-oncology. Liposomes are examples of lipid-based nanomaterials. They are primarily made up of phospholipids that can create both unilamellar and multilamellar vesicles, which enable them to transport and distribute hydrophilic, hydrophobic, and lipophilic drugs, as well as entangle hydrophilic and lipophilic molecules in the same system . In contrast, CNTs, which are one of the most common examples of CNMs, are highly insoluble and must often be chemically treated before they can be dispersed in various liquids. Their insolubility in the most common dispersing agents, such as surfactants or polymers, results in a colloidal dispersion rather than a solution, which may limit their use in drug delivery in immunotherapy . Another hurdle posed by CNMs is the biodistribution and pharmacokinetics of nanoparticles, which are influenced by a variety of physicochemical properties, such as their shape, size, chemical composition, aggregation, solubility, and functionalization. 
Particles smaller than 100 nm have been reported to increase hazardous effects to the lung, the evasion of typical phagocytic defenses, structural changes in proteins, activation of inflammatory and immunological responses, and possible redistribution from their site of accumulation . However, the key advantages of nonstructured lipid nanostructures which were developed from structured lipid nanostructures include the ability to be loaded with hydrophilic and hydrophobic drugs, to be surface-modified, to allow for site-specific targeting and controlled release of the drug, and their low in vivo toxicity. However, there are significant drawbacks as well, such as drug ejection following polymorphic transition of the lipid from the nanocarrier matrix during storage, and poor loading capacity . Using resonance Raman spectroscopy, the oxidative stability of three typical 1D CNMs, including linear carbon chains, CNTs, and graphene nanoribbons, were systematically studied and found to be thermally stable up to 500 °C . However, Holm et al. reported that lipid-based formulations were less optimized and could contain traces of peroxides with the potential to catalyze their degradation. This degradation could create difficulties in achieving the shelf-life of the formulation for supporting preclinical and clinical trial studies. Furthermore, there are no documented antioxidants that can be added to the formulations to prevent this degradation, which indicates that a relationship between stability and lipid-based nanomaterials needs to be clarified . Notably, lipid-based nanoparticles possess advantages which include high temporal and thermal stability, high loading capacity, ease of preparation, low production costs, and large-scale industrial production, since they can be prepared from natural sources . The industry’s expertise, in addition to its scientific and manufacturing platforms, could greatly influence the choice to utilize either lipid-based formulations or a CNM formulation. Unfortunately, this information has not been clearly spelt out. As a result, the public domain does not fully explain the determining relevant composition of the formulation of the molecule. A detailed investigation of successful events and the knowledge on how to establish their composition might help to lower perceptions of risk. The essential in vitro features in the design of these formulations for identifying the optimum composition of the formulation, as well as appropriate quality methodologies, must be examined for in vitro and in vivo applications. More research is needed to determine which animal species should be utilized to explore individual formulations . Nonetheless, the unique characteristics of lipid-based nanomaterials and CNMs can enable them to selectively modulate critical signaling pathways inside diverse immune cell populations through their material compositions, shapes, or contact alterations to elicit significant antitumor effects . For instance, carbon- and lipid-based nanoparticles may both be encapsulated with antigens and used for systemic delivery into APCs, similar to DCs. DCs then stimulate antitumor T cell responses by antigen translation and cross-presentation . Nanoparticles imprinted with tumor antigens can increase the transport to APCs in lymphoid organs, leading to better DC maturation and T cell-mediated tumor death. Aside from delivery, nanoparticles can induce anticancer immune cell phenotypes . 
Carboxylated MWCNTs (MWCNTs-COOH), for example, can limit tumor spread by modulating the polarization of macrophages .
Immune dysfunction is linked to an increased risk of certain cancers . This indicates that appropriate immune activation may protect against some cancers. Furthermore, tumor cells are genetically unstable and may be difficult to target utilizing particular therapy regimens due to the tumor’s resistance . This implies that the immunomodulatory activities of CNMs can be leveraged to minimize the progression of cancer . Through a combinational application with chemotherapy, phototherapy or radiotherapy, CNMs may elicit the activation of T cells against tumor cells and enhance anticancer efficacy with lesser toxicity. 5.1. CNTs Hassan et al. demonstrated that effective tumor elimination necessitates a stronger antitumor immune response. They used MWCNTs as tumor antigen nanocarriers to deliver immunoadjuvants such as cytosine-phosphate-guanine oligodeoxynucleotide (CpG) and anti-CD40 Ig (CD40) with the model antigen ovalbumin (OVA) to elicit an immune response against OVA-expressing tumor cells. The MWCNTs boosted the CpG-mediated adjuvanticity, as evidenced by the dramatically higher OVA-specific T cell responses in vitro and in C57BL/6 mice. MWCNTs significantly increased the efficacy of coloaded OVA, CpG, and CD40 to prevent the proliferation of OVA-expressing B16F10 melanoma cells in pseudometastatic subcutaneous or lung tumor models . Additionally, CNTs were demonstrated to be good CpG delivery vehicles in CX3CR1GFP mouse models. First, functionalized single-walled carbon nanotubes (CNT-CpG) were examined and confirmed to be nontoxic. Secondly, this functionalization increased the absorption of CpG in vitro as well as in intracranial gliomas. CNT-mediated administration of CpG also increased the production of proinflammatory cytokines by primary monocytes. Surprisingly, a single intracranial injection of low-dose CNT-CpG eliminated intracranial GL261 gliomas in half of tumor-bearing animals by activating NK and CD8 cells. Furthermore, the surviving mice were protected from the recurrence of intracranial tumors, indicating the activation of long-term anticancer immunity. These findings have immediate implications for future CpG immunotherapy studies . In another study, acid-functionalized MWCNTs (ox-MWCNTs) were coupled with hyperthermia therapy to treat breast cancer. EMT6 tumor-bearing mice were treated with ox-MWCNTs and local hyperthermia at 43 °C, which resulted in full eradication of the tumor and a considerable improvement in the mice’s median survival. In addition, there was an increase in the infiltration and maturation of DCs in mice. Furthermore, a considerable increase in tumor-infiltrating CD8+ and CD4+ T cells, as well as macrophages and NKs, was found in tumors treated with ox-MWCNTs–hypothermia combination therapy . Nevertheless, SWCNTs have been proven to be antigen carriers capable of transporting antigens into APCs and eliciting humoral immune responses against weak tumor antigens. In this case, Wilm’s tumor protein (WT1) ligands, an upregulated protein in many human leukemias and cancers, were covalently attached onto solubilized SWCNT scaffolds to form SWCNT–peptide constructs. These constructs were rapidly absorbed by professional APCs (dendritic and macrophages) in vitro. Additionally, immunization of BALB/c mice with SWCNT–peptide constructs paired with immunological adjuvant elicited specific IgG responses against the peptide, but not against the peptide alone or in combination with the adjuvant, showing that the SWCNTs were not immunogenic . 
Proteins that interact with smaller nanoparticles tend to preserve their structure far better than those that interact with bigger ones because smaller nanoparticles have a higher surface curvature, which limits the area of contact with the proteins . Therefore, any interaction between the CNMs with the proteins may alter their functionality. Along the same lines, another investigation was conducted using OT-1 mice (mice in which CD8+ T cells developed a transgenic TCR specific for the SIIN peptide of ovalbumin displayed on H-2Kb). To circumvent the denaturing effects of their direct adsorption on CNTs, a simple yet robust technique of noncovalently attaching the T cell stimulus to the CNT substrates was developed. This demonstrated that CNT-based substrates can be designed to deliver MHC-I effectively for antigen-specific activation of T cells. They investigated the interaction of MHC-I with CNTs in a wide variety of other proteins to assess the stability and function of a physiological multimeric protein, MHC-I, on CNTs for applications linked to antigen-specific T cell activation. When compared with a soluble control under identical settings, the technique increased antigen-specific T cell responses by more than thrice. This study shed light on how noncovalent chemistry and adaptor proteins may be used to provide complex stimuli on CNT substrates . When bundled SWCNTs are chemically treated to generate functionalized bundled SWNTs (f-bSWNTs), it improves protein adsorption compared with conventionally bundled SWCNTs. Indeed, f-bSWNTs have been discovered to be efficient antigen-presenting substrates. Splenocytes obtained from the spleens of C57BL/6 mice were treated with T cell antigens and costimulatory ligands (CD3 and CD28) adsorbed on these substrates to examine the kinetics of T cells’ responses on the surface of the nanotubes. The stimulation of primary T lymphocytes isolated from mouse spleens by these antibody-adsorbed substrates was measured by the cytokine secretion of traditional activation determinants such as interleukin-2 and interferon gamma (IFN-γ). The adsorption of T-cell-stimulating antibodies has been demonstrated to improve both the kinetics and amount of T cell activation. When compared with comparable artificial substrates with a large surface area and similar chemistry, this improvement is unique to f-bSWCNTs. These results supported the utilization of chemically processed nanotube bundles as an effective substrate for antigen presentation and indicate their potential utility in clinical applications requiring the presentation of artificial antigen . Previously, Fadel et al. investigated the utilization of SWCNT bundles in the presentation of T-cell-activating antibodies to evoke immune responses against specific targets such as tumors. Because of the vast surface area of these bundles, T-cell-stimulating antibodies, such as anti-CD3, can be delivered at high local concentrations, resulting in powerful activation of T cells. Therefore, antibody stimuli adsorbed onto SWCNT bundles constitute a unique model for the effective activation of lymphocytes, with implications for fundamental science and clinical immunotherapy . 5.2. Graphene Yue et al. synthesized graphene oxide (GO) with an OVA (ovalbumin) antigen construct and tested the efficiency of its immune activation in E.G7-OVA tumor-bearing mice utilizing bone marrow dendritic cells (BMDC, primary professional cells for antigen presentation). 
In addition, the levels of the costimulator CD86 and the MHC II molecule were examined after GO-OVA uptake in vitro and found to be elevated. In an E.G7 tumor-bearing mouse model, tumor development was considerably inhibited in the GO-OVA group. Because of the unique bio- and physicochemical characteristics of two-dimensional graphene oxide (GO), GO-OVA increased cell recruitment, antigen transport, and antigen cross-presentation to CD8 cytotoxic T cells. It also activated autophagy, which contributed to the programmatic activation of specific CD8 T cells in vivo. GO functionalized with polyethylene glycol (PEG) and polyethylenimine (PEI) has been reported as a vaccination adjuvant for immunotherapy, with urease B (Ure B) as a model antigen; Ure B is the specific antigen of Helicobacter pylori, which has been recognized as a Class I carcinogen for gastric cancer. The treatment of DCs with GO-PEG-PEI significantly increased the production of interleukin 12 (IL-12), which is critical in the stimulation of NKs and T lymphocytes. Importantly, it accelerated the maturation of DCs and increased their cytokine release by activating several toll-like receptor (TLR) pathways. Furthermore, this GO-PEG-PEI worked as an antigen carrier, efficiently transporting antigens into DCs, implying prospects for cancer immunotherapy. Wang et al. proposed a unique alum-based adjuvant formulation by creating AlO(OH)-modified graphene oxide (GO) nanosheets (GO-AlO(OH)), which, in addition to preserving the induction of the humoral immune response by AlO(OH), may also elicit a cellular immunological response through GO. A GO-AlO(OH) vaccine formulation was created by including the antigen using a simple mixing/adsorption method. In mouse models, antigen-loaded GO-AlO(OH) nanocomplexes increased antigen uptake and boosted the activation of DCs, eliciting greater antigen-specific IgG titers, producing a strong CD4+ and CD8+ T lymphocyte response, and suppressing the development of melanoma tumors.
5.3. Fullerenes
There is growing evidence that fullerene-based nanomaterials such as C 60 (OH) 20 nanoparticles have antitumor activity. Water-soluble C 60 (OH) 20 nanoparticles have demonstrated effective antitumor immunomodulatory effects on immune cells such as T cells and macrophages both in vivo and in vitro. For instance, they boosted the production of Th1 cytokines (IL-2, IFN-γ, and TNF-α), which helped to kill tumor cells through increased production of CD4+/CD8+ lymphocytes. Another fullerene-based nanomaterial derivative is fullerol (C 60 (OH) x ). C 60 (OH) x was investigated for tumor-inhibitory activity in the H22 hepatocarcinoma mouse model. C 60 (OH) x improved the phagocytic activity of peritoneal macrophages. Additionally, C 60 (OH) x -treated macrophages generated more tumor necrosis factor alpha in vitro, implying that C 60 (OH) x can boost innate immunity in tumor-bearing mice, thereby limiting the development of tumors.
5.4. CDs
To achieve the desired cancer immunotherapy, polymer-coated CDs were evenly inserted into an ordered framework of mesoporous silica nanoparticles (CD@MSNs). The resulting CD@MSN was not only biodegradable but could also perform photothermal imaging-guided PTT in vivo. Interestingly, it was discovered that CD@MSN-mediated PTT could achieve immune-mediated prevention of tumor metastasis by promoting the proliferation and activation of NKs and macrophages while upregulating the production of cytokines such as IFN-γ and granzyme B.
This study offered an unconventional method of producing biodegradable mesoporous silica and gave novel insights into the anticancer immunity associated with biodegradable nanoparticles. To create chiral nanovaccines for cancer immunotherapy, researchers have used chiral CDs as carriers and adjuvants, together with ovalbumin (OVA) as a model antigen. This design was efficiently internalized by bone marrow-derived DCs (BMDCs) from mice. Because of their fluorescence, chiral CDs allowed cellular uptake to be measured noninvasively, and they elicited a robust immunological response with increased BMDC maturation, T cell proliferation, and cytokine release. Moreover, they elicited a robust antitumor T-cell-mediated immune response and suppressed the development of B16-OVA melanoma tumors implanted in C57BL/6 mice. In vitro tests revealed that chiral CDs had a capacity comparable to LPS for inducing the maturation of BMDCs. This study proposed a novel method for producing multifunctional nanovaccines for enhanced cancer treatment. As vaccine adjuvants, photoluminescent CDs were coupled with the model tumor protein antigen ovalbumin (OVA). These CDs greatly enhanced antigen uptake and the maturation of DCs. The CD–OVA nanocomposite dramatically enhanced the levels of the costimulatory molecules CD80 and CD86, which were used as markers of DC maturation. In addition, DCs produced more tumor necrosis factor (TNF-α). Furthermore, CD–OVA was demonstrated to significantly boost the proliferation of splenocytes and the production of IFN-γ. Interestingly, this CD–OVA vaccine was successfully endocytosed and processed by immune cells in vivo, resulting in significant antigen-specific cellular immune responses that inhibited the development of B16-OVA melanoma cancer in C57BL/6 mice.
5.5. Nanodiamonds
Another type of nontoxic CNM, fluorescent nanodiamonds (FNDs), was used to stimulate NKs and monocytes as an approach to boost antitumor activity. The uptake of FNDs and immune cell activation were significantly dose-dependent, as evaluated by the increased production of monocyte-derived TNF-α and NK cell-derived IFN-γ. Following subcutaneous injection, FNDs were detected in wild-type BALB/c mice. To boost the DC-driven anti-GBM immune response, doxorubicin–polyglycerol–nanodiamond composites (nano-DOX), a potent inducer of DAMPs, were created. In vitro, nano-DOX stimulated both human and animal DCs to inhibit glioblastoma cancer cells. Furthermore, nano-DOX promoted the infiltration and activation of mouse bone marrow-derived DCs as well as lymphocytes into glioblastoma xenografts. This suggested that administering nano-DOX through DCs might increase GC immunogenicity and elicit an anticancer immune response in GBM.
In summary, CNMs have emerged as a promising novel therapeutic platform to influence the immune response, particularly in cancer immunotherapy, due to their inherent characteristics and functionalization, targeted drug administration, and interactions with immune cells. CNMs may deliver carrier materials to particular cells, such as vaccines to APCs, to stimulate the immune system significantly. CNMs also permit stimulus responses in different cancer cells for combination cancer immune therapy. Furthermore, in order to activate the host immune response significantly and safely against cancer cells, CNMs have been engineered to transport antigens and chemotherapeutic agents to tumor cells.
Through their combined application with chemotherapy, phototherapy, or radiotherapy, CNMs may elicit the activation of T cells against tumor cells and enhance anticancer efficacy with less toxicity. Owing to the intrinsically high near-infrared (NIR) absorption of several CNMs, including CNTs and graphene-derived materials such as GO, the pairing of CNMs with phototherapy has gained popularity in recent years. We have summarized the immunomodulatory properties of CNMs in preclinical cancer studies in .
Research on the toxicological profiles of CNMs has revealed both cellular toxicities and immunological impacts. We highlight below the reported adverse effects of CNMs based on in vivo tumor models. Impurities, particularly catalyst metal contaminants such as Fe, Y, Ni, Mo, and Co introduced during synthesis and purification, contribute to the toxicity of CNTs. The presence of metal contaminants may result in contradictory findings about the biological properties, safety, and risk of CNTs, limiting their future practical uses. For example, nickel oxide in SWCNTs may influence the redox characteristics of the regulatory peptide l-glutathione, a potent antioxidant that protects cells against oxidative stress. Additionally, in one study, SWCNTs with varying metal contents were intratracheally administered into the lungs of spontaneously hypertensive rats (0.6 mg/rat) once a day on two consecutive days. This resulted in immediate and severe lung problems, including pulmonary inflammation, oxidative stress, and toxicity, as shown by the cell counts, MPO, LDH, albumin, protein, TNF-α, IL-6, MIP-2, CC16, and HO-1 data. Metal impurity-rich SWCNTs elicited markedly stronger adverse reactions. After the instillation of SWCNTs, the predominant lung histological abnormalities were pulmonary inflammation, multifocal granuloma development, and diffuse CNT particle deposition in the alveoli, as well as bronchiolar cell hypertrophy. In another study, B6C3F1 mice were given 0.5 mg of raw or refined carbon nanotubes by intratracheal instillation. The CNTs caused dose-dependent epithelioid granulomas and, in some cases, interstitial inflammation, causing lung lesions. The lungs of some mice also displayed peribronchial inflammation and necrosis that spread into the alveolar septa. In addition, fatigue, inactivity, and weight loss were reported 4 to 7 days after the CNTs were instilled. Because unprocessed nanotubes are very light, they might become airborne and potentially enter the lungs. CNTs may be far more dangerous once they enter the lungs and are considered a serious occupational health issue in chronic inhalation exposures. Using a different route of exposure, C57BL/6 mice were intraperitoneally injected with each type of CNT, resulting in a total exposure of 50 µg per mouse. A comparison of the inflammatory reactions to these different forms of CNTs, including three distinct types of MWCNTs and one type of SWCNT, revealed that peritoneal injections of long and thick MWCNTs generated significant inflammatory effects. Furthermore, a sensitive approach for detecting DNA damage at the level of the individual eukaryotic cell was applied, which revealed considerable DNA damage in vitro. It is also important to note that various CNMs, including SWCNTs, MWCNTs, and fullerene (C 60 ), were evaluated for their toxicity. At a low dose of 0.38 µg/cm², SWCNTs dramatically inhibited the phagocytosis of alveolar macrophages (AM) derived from adult pathogen-free healthy guinea pigs, whereas MWCNT10 and C 60 produced damage only at a high dose of 3.06 µg/cm². Furthermore, macrophages treated with SWCNTs or MWCNT10 at 3.06 µg/cm² displayed characteristics of necrosis and degeneration. This showed that the dose-dependent cytotoxicity mechanisms of SWCNTs and MWCNT10 are distinct. Of note, the acute toxicity of C 60 fullerene in mice was subsequently evaluated 14 days after a single intraperitoneal dose.
C 60 fullerene had no harmful impact in the dosage range of 75–150 mg/kg; the toxic effect of C 60 fullerene was detected at concentrations of 300 mg/kg and above, and it was associated with behavioral disruption, hematotoxicity, and pathomorphological abnormalities in the spleen, hepatic, and renal tissues in mice. A dosage range of 75–150 mg/kg of a C 60 fullerene aqueous colloid solution was found to be safe and might be used for biological reasons . The immunological characteristics of oxidized water-dispersible MWCNTs in normal BALB/c mice were also examined after a subcutaneous injection of MWCNTs. The dynamic fluctuation in C3 and C5a levels in the serum suggested that this mode of delivery promoted activation of the complement quickly after the MWCNT injection. The MWCNTs activated the complement and produced proinflammatory cytokines such as interleukin (IL)-17, I-TAC, IL-1β, and IFN- γ early on. However, the complement and cytokines levels reverted to the baseline over time. There was no evident buildup of MWCNTs in the liver, spleen, kidney, or heart, with the exception of the lymph nodes. Histological examinations revealed just a modest inflammatory reaction at the injection site, with no granulomas identified over time. These findings contradicted previous findings when carbon nanotubes were administered intratracheally or intraperitoneally. Hence, these findings showed that administering MWCNTs subcutaneously was safer than administering them systemically . Finally, the effects of GO and reduced graphene oxide (rGO) on glioma tumor cells directly implanted in models of chicken embryo chorioallantoic membranes were studied. The malignancies were removed after three days for additional examination. At a concentration of 100 µg/mL, increased quantities of GO and rGO resulted in reduced cell proliferation, viability, and cell organelle damage in glioma tumor cells. The findings showed that the interaction of GO and rGO with the glioma cells in tumors, which resulted in severe toxicity, was dependent on the shape of the graphene’s surface .
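The studies above report exposures in heterogeneous units (per animal, per kg of body weight, or per unit of culture area), which makes direct comparison difficult. A rough, illustrative way to compare per-animal doses across species is to normalize them by body weight; the sketch below uses typical laboratory body weights, which are assumptions and not values reported in the cited studies.

```python
# Illustrative normalization of per-animal CNT doses to mg/kg.
# Body weights are typical textbook values, NOT taken from the cited studies.
doses_mg = {
    "SWCNTs, rat, intratracheal": 0.6,    # 0.6 mg/rat per instillation
    "CNTs, mouse, intratracheal": 0.5,    # 0.5 mg/mouse
}
assumed_body_weight_kg = {
    "SWCNTs, rat, intratracheal": 0.25,   # ~250 g adult rat (assumed)
    "CNTs, mouse, intratracheal": 0.025,  # ~25 g adult mouse (assumed)
}

for label, dose in doses_mg.items():
    per_kg = dose / assumed_body_weight_kg[label]
    print(f"{label}: ~{per_kg:.1f} mg/kg")
# The same order of absolute dose corresponds to ~2.4 mg/kg in the rat but
# ~20 mg/kg in the mouse, which is why per-kg figures (as used for the
# fullerene study above) are easier to compare across models.
```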
Given the promising data of CNMs, alone or as drug carriers, to modulate the immune response, as discussed above, they have great potential for clinical applications, such as cancer therapy. However, there are important concerns and challenges. In , we summarize the current issues to be overcome before their successful translation from the laboratory to the clinic. Primarily, because the long-term consequences of CNMs are time-consuming to investigate, they have rarely been documented. Next, additional research into a more precisely regulated production procedure for some of these CNMs is still required. Materials derived by various synthesis techniques typically have highly varied characteristics and, as a result, distinct biomedical properties. For example, graphene with a single or few layers is typically required for biological applications that require a more regulated production technique . A variety of surface modifiers and biomacromolecules have been created to enhance the characteristics of CNMs. The functionalization improves the efficacy of their application in the field of biomedicine such as immuno-oncology . However, some functionalizing agents may produce undesired effects in the process. For example, PEG is commonly used to functionalize CNMs such as GO. PEG has been shown to be immunogenic, interfering with the effects of the given antigen . Another aspect to consider is the presence of pre-existing anti-PEG antibodies in the blood of some healthy donors. In fact, anti-PEG antibodies have been proven to affect the therapeutic effectiveness and safety of PEGylated medicines . Additionally, the size, shape, and chemical surface composition of several nanomaterials can determine the impact of their immunological regulation. This creates the unprecedented unpredictability of any new potential nanomedicine in any clinical trial . This implies that there could be inconsistent results in clinical trials in cases where the size, shape, or surface compositions are slightly distorted. Nanomaterials can be used not only as drug carriers but also as immunomodulators of certain biochemical processes. This poses a difficulty in measuring the effects induced by either the CNM or the antigen being delivered. Furthermore, polyhydroxylated fullerenols were reported to have immunosuppressive effects on macrophages and T cells. Such effects included tilting the cytokine balance, favoring the release of Th1 cytokines and reducing the secretion of Th2 cytokines . In a study conducted by Schrand et al., CNMs demonstrated both material- and cell-specific cytotoxicity. In fact, a general trend for biocompatibility with the susceptibility of macrophages to cytotoxicity involving nanodiamonds, MWCNTs, SWCNTs, and carbon black particles was found. Indeed, macrophages were shown to be more susceptible to cytotoxicity compared with neuroblastoma cells . Because malignancies develop in a variety of cells across the body, this might be a significant translational hurdle for clinical trials.
Thanks to their unique physicochemical properties and, perhaps more importantly, their interactions with the immune system, CNMs have offered new approaches for the enhancement of immune-based therapies against cancer. In this review, we have discussed the immunomodulatory mechanisms of CNMs and have highlighted the current status of preclinical applications of CNMs in cancer therapy. Because the full potential of this therapeutic modality has yet to be achieved, we have also highlighted the adverse in vivo effects, as well as the translational challenges, of CNMs in oncological studies ( and ). One of the attractive properties of CNMs is their use as effective platforms for drug delivery and targeting. The flexibility of chemically modifying CNMs adds a further capacity to create new systems that can be adapted to specific interventions. For example, CNMs have been used for targeted drug delivery to enhance the efficacy of other treatments, such as chemotherapy. In addition, the covalent and noncovalent functionalization of CNMs with different biomolecules, drugs, or antibodies allows their selective accumulation in tumors. Therefore, one future direction is the combination of CNMs with current treatment regimens, although this is still at an early stage. To move forward, one challenge is to identify the population of patients who are most likely to respond to the therapy. In this regard, further investigation is required to identify the biomarkers that predict the maximal synergy of CNMs with current therapies. In the near future, it will be interesting to see the discovery of novel biomarkers for the therapeutic response to CNM-based combination treatments and of the best approaches to manipulating the immune response in favor of antitumor benefits, as medicine moves towards the development of custom-tailored precision therapies.
Enriched Graphene Oxide-Polypropylene Suture Threads Buttons Modulate the Inflammatory Pathway Induced by
c3f492a6-252d-4efa-846d-16cf6a443bd8
10095426
Suturing[mh]
Graphene, a two-dimensional (2D) nano-structure containing sp 2 carbon atoms, is a building block of several carbon-based materials, including graphite, bucky balls, and carbon nanotubes . Graphene was discovered in 2004, and it appeared as a promising nanomaterial due to its catalytic, optical, and electrical properties as well as remarkable physical properties such as a large specific surface area and mechanical strength . In the medical and biological fields, the usefulness of graphene and its derivatives is due to their ability to improve the biocompatibility of various materials already used in tissue engineering . For example, the high aspect ratio, planar structure, flexibility, and hybridization of carbon atoms of graphene help to increase some material properties such as stability , strength , and electric conductivity . Graphene oxide (GO) is a nanomaterial derived from the oxidation of graphene, already used in countless electronic, environmental, medical, and biological applications . GO is a layer of carbon atoms organized to form a series of hexagons, and unlike graphene, it has functional groups such as hydroxyl (–OH) and epoxy (C-O-C) groups bonded to the underlying graphene plane, while the edges of the sheet are functionalized with carboxylic groups (–COOH) . Due to the presence of these chemical groups, GO keeps some properties typical of graphene, such as strength, high mechanical stiffness, transparency, and flexibility . Moreover, it can be easily dispersed in water, and its functional groups allow easy further functionalization or grafting. Studies have proven that graphene and GO are highly biocompatible with low toxicity levels and excellent cytocompatibility , which enhance their use as a support for tissue regeneration, cell growth, and cell differentiation , at least for the concentration of 10 μg/mL or lower . The innumerable properties of graphene and its derivatives have prompted biomedical research to evaluate the possible application of these materials in the medical field and to study their interaction with the biological system . Over the years, several studies have highlighted how these nanomaterials play a key role in the modulation of biological processes such as inflammation and apoptosis , and it was also evidenced that they promote cell adhesion, cell growth, and antibacterial activity . The antibacterial activity of GO includes different mechanisms, such as membrane damage due to sharp edges and oxidative stress and bacterial cell wrapping . The biological effect of GO is still not well understood, and different mechanisms have been highlighted depending on the physico-chemical features of GO, its functionalization and dimensions or the tested material in which it is embedded, and the type of investigated cells . The mechanism of cell GO interactions ranges from masking, piercing, rippling, pore formation (generally via membrane lipid extraction), electron transfer, and cation chelation, and it may or not provide internalization into cells. Moreover, differences in cell culture conditions may come into play . The wide increase of studies regarding the properties of GO is strongly encouraged by the fact that graphene is renewable as it can be easily obtained by renewable sources such as lignin , is cheap, easily functionalized, and is a one-atom-thick molecule. It is effective at very low concentrations. The innumerable properties of the GO make its association with various biomaterials an interesting approach in the tissue engineering field. 
To date, different materials, including many types of suture threads, such as polyglycolic acid (PGA) multifilament surgical sutures and chitin monofilament absorbable surgical sutures, have been functionalized with GO to improve the surface wettability and tensile strength of the suture. Among the materials used in the production of sutures, polypropylene (PP) is one of the most commonly used. The PP suture is generally a non-absorbable monofilament formed by the catalytic polymerization of propylene. This polymer gives the suture long-term tensile strength superior to that of other materials, such as nylon. For this reason, it is safe in many applications, such as general surgery, as well as in procedures of vascular and cardiac surgery. In the present work, polypropylene suture thread buttons (PPSTBs), obtained from the same material used to produce polypropylene suture threads, have been utilized for easy handling. In detail, the PPSTBs were obtained from the Assut Europe S.p.A. company (Magliano De Marsi (AQ), Italy) by mixing PP with GO at two different concentrations. Lipopolysaccharide (LPS), also called endotoxin, is present in the membrane of Gram-negative bacteria such as E. coli. Bacterial endotoxins are involved in the pathogenesis of Gram-negative sepsis. Infections following injuries, burns, or surgery can lead to the accumulation of bacterial endotoxins, such as those produced by E. coli, in the bloodstream. The endotoxin of E. coli, in contact with cells, causes the release of pro-inflammatory cytokines after the activation of Toll-like receptor (TLR) 2 and TLR4, thus stimulating an immune–inflammatory response. Most Gram-negative bacteria are recognized to induce the production of pro-inflammatory cytokines principally through the TLR4 and nuclear factor-κB (NF-κB) pathways. As reported by Pansani T.N. et al., TLR4 pathway activation is involved in the production of pro-inflammatory cytokines such as interleukin-6 (IL-6) and interleukin-8 (IL-8), which was also observed in gingival fibroblasts stimulated with E. coli LPS (LPS-E). Moreover, LPS-E induced a higher expression of inducible nitric oxide synthase (iNOS), IL-6, and monocyte chemotactic protein-1 (MCP-1) in an in vitro model of gingival fibroblasts stimulated with E. coli LPS than in gingival fibroblasts stimulated with P. gingivalis LPS. Based on this knowledge, the purpose of the current work was to analyze the biological effects of PPSTBs enriched with two different concentrations of GO in an in vitro model of primary human gingival fibroblasts (hGFs), to evaluate the potential protective role of PPSTBs functionalized with GO in the inflammatory process through modulation of the TLR4/MyD88/NFκB p65/NLRP3 pathway.
2.1. PPSTBs, PPSTBs-GO 5 μg/mL, and PPSTBs-GO 10 μg/mL Characterization
In , the AFM topographical micrographs as well as the DMT modulus channels of bare and GO-enriched PPSTBs composites are reported. By using the Peak Force QNM mode, Young’s elastic modulus for the three samples was obtained. Mean values of Young’s modulus of 4.22 ± 1.49 GPa and 4.39 ± 0.99 GPa were recorded for the PPSTBs and PPSTBs-GO 5 μg/mL samples, respectively, whereas a Young’s elastic modulus of 8.40 ± 1.39 GPa was obtained for the PPSTBs-GO 10 μg/mL samples. The diffraction patterns of PPSTBs and PPSTBs-GO, reported in , showed the typical diffraction peaks of PP in the 2θ range between 10° and 30°. These peaks were related to the crystalline phase of isotactic PP (i-PP), located at 14°, 17°, 18.5°, 21°, and 22° and corresponding to the indexed planes of the monoclinic crystals of the α-form of i-PP (110), (040), (130), (111), and (131) + (041), and to the trigonal crystals of the β-form at 16° and 21°, corresponding to the indexed reflections of (300) and (301), respectively. The absence of peaks connected to GO (the (001) peak that typically appears between 9–12° 2θ) in the diffraction patterns of PPSTBs-GO 5 μg/mL and PPSTBs-GO 10 μg/mL indicated that the nanocomposites did not possess layered GO. The addition of GO, even at the highest investigated concentration, did not significantly alter the diffraction pattern of PP. The only difference in the diffraction pattern upon enrichment with GO was the disappearance of the β phase, likely due to a different cooling rate in the crystallization region or nucleation of β crystallites.
2.2. Cell Viability Assay
MTS assay was performed on hGFs, hGFs + PPSTBs, hGFs + PPSTBs-GO 5 μg/mL, and hGFs + PPSTBs-GO 10 μg/mL cultured with or without LPS-E at 24, 48, and 72 h ( ). Cell viability was significantly increased in the samples with PPSTBs-GO 5 μg/mL and PPSTBs-GO 10 μg/mL compared to the PPSTBs and CTRL samples. The cell metabolic activity increased in hGFs with PPSTBs functionalized with GO, with or without the LPS-E treatment.
2.3. hGFs Morphological Analysis
After 24 h of LPS-E treatment, the morphology of hGFs alone or cultured with PPSTBs, PPSTBs-GO 5 μg/mL, and PPSTBs-GO 10 μg/mL was observed using an inverted light microscope and SEM. No morphological differences were observed among the experimental conditions under the inverted light microscope ( A1–D2). The SEM images showed that hGFs adhere equally on PPSTBs, PPSTBs-GO 5 μg/mL, and PPSTBs-GO 10 μg/mL, both in the presence and in the absence of LPS-E treatment ( ).
2.4. GO-Enriched PPSTBs Influence Protein Expression as Evidenced by CLSM and Western Blot Analyses
The immunofluorescence images report the expression levels of TLR4/MyD88/NFκB p65/NLRP3 in untreated hGFs, in hGFs cultured with PPSTBs, in hGFs cultured with PPSTBs enriched with GO at 5 μg/mL, in hGFs cultured with PPSTBs enriched with GO at 10 μg/mL, in hGFs stimulated with LPS-E, in hGFs cultured with PPSTBs and stimulated with LPS-E, in hGFs cultured with PPSTBs enriched with GO at 5 μg/mL and stimulated with LPS-E, and in hGFs cultured with PPSTBs enriched with GO at 10 μg/mL and stimulated with LPS-E. The results showed that the TLR4/MyD88/NFκB p65/NLRP3 pathway was significantly upregulated in hGFs treated with LPS-E alone or in hGFs cultured with PPSTBs and LPS-E for 24 h compared to the untreated cells.
Moreover, TLR4/MyD88/NFκB p65/NLRP3 expression levels were lower in the cells cultured with GO-enriched PPSTBs and LPS-E compared to hGFs treated with LPS-E alone or hGFs cultured with PPSTBs and LPS-E. The hGFs cultured with PPSTBs enriched with GO at 10 μg/mL had a level of expression of TLR4/MyD88/NFκB p65/NLRP3 comparable to that of the CTRL sample group ( , , and ). The results obtained by Western blot were comparable to those obtained by confocal immunofluorescence ( ).
2.5. Gene Expression
The histograms show the gene expression of TLR4/MYD88/RELA/NLRP3 and FN1/VIM/VCL/PTK2/ITGA5/ITGB1 evaluated by Real-Time PCR ( and ). The hGFs treated with LPS-E showed a significantly higher gene expression of TLR4/MYD88/RELA and NLRP3 compared to the untreated cells. Moreover, hGFs cultured with PPSTBs enriched with GO at 5 μg/mL and 10 μg/mL and stimulated with LPS-E evidenced a remarkably lower gene expression compared to hGFs stimulated with LPS-E, confirming the qualitative results obtained by CLSM observations and Western blot analysis ( , , , and ). Conversely, the expression of the FN1/VIM/VCL/PTK2/ITGA5 and ITGB1 genes was significantly lower in the hGFs and hGFs + PPSTBs samples compared to the samples with PPSTBs enriched with both GO concentrations. The same results were shown in the samples treated with LPS-E ( a–f).
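The relative gene-expression values reported here come from Real-Time PCR; such fold changes are most commonly computed with the 2^-ΔΔCt method, normalizing each target to a reference gene and then to the untreated control. The sketch below is a generic illustration of that calculation; the reference gene and the Ct values are hypothetical and are not data from this study.

```python
# Generic 2^-(delta-delta Ct) calculation for relative gene expression.
# Gene names, reference gene, and Ct values are placeholders, not study data.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression (treated vs control) by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # compare with untreated control
    return 2 ** (-dd_ct)

# Example: a target whose Ct drops by 2 cycles after LPS-E stimulation
print(fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0, i.e., ~4-fold upregulation
```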
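For context, the Young’s modulus values reported in Section 2.1 are obtained in Peak Force QNM by fitting each force–indentation curve with the DMT contact model; a schematic form of the fit is given below, where the symbols are generic quantities (adhesion force, tip radius, indentation, Poisson ratios) rather than parameters reported by this study.

```latex
% DMT contact model used to extract the reduced modulus E* from force curves
F - F_{\mathrm{adh}} = \tfrac{4}{3}\, E^{*} \sqrt{R}\, \delta^{3/2},
\qquad
\frac{1}{E^{*}} = \frac{1 - \nu_{s}^{2}}{E_{s}} + \frac{1 - \nu_{\mathrm{tip}}^{2}}{E_{\mathrm{tip}}}
```

Here F is the measured force, F_adh the adhesion force, R the tip radius, δ the sample deformation, and E_s, ν_s (E_tip, ν_tip) the elastic modulus and Poisson ratio of the sample (tip); the sample modulus quoted in the Results follows from E* once the tip properties are assumed.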
GO plays a pivotal role in the biological and medical fields, as well as in tissue repair, due to its ability to enhance cell adhesion, proliferation, and differentiation. In addition, GO possesses anti-inflammatory and antibacterial properties. As reported by Radunovic et al., many biomaterials, such as titanium disks and collagen membranes functionalized with GO, showed reduced bacterial biofilm formation when compared with non-functionalized biomaterials. AFM was used to characterize the surface morphology of the samples. AFM height images of the PPSTBs-GO 5 μg/mL ( E) and PPSTBs-GO 10 μg/mL ( H) samples showed a less uniform morphology compared to the PPSTBs sample ( B). Indeed, the dispersion of GO in PP involves the establishment of new interactions between PP and GO that require the breaking of PP intermolecular interactions. This rearrangement implies a complete reorganization of PP molecules around the GO sheets and may alter the apparent morphology of the PPSTBs-GO compared to that of pure PPSTBs. Nevertheless, no relevant differences in surface roughness were observed on the GO-enriched samples in comparison with PPSTBs without GO. As far as stiffness is concerned, the obtained Young’s elastic modulus values demonstrate that the addition of GO at a concentration of 5 μg/mL did not influence the stiffness characteristics of the starting material. In contrast, the increase in the elastic modulus of the PP composites in the presence of 10 μg/mL GO was well defined, and it can be attributed to stress transfer from the polymer matrix to well-dispersed, strong GO sheets, enhancing the mechanical properties of the material. Similarly, XRD analysis did not evidence the presence of aggregated/layered GO, confirming the good dispersion of the GO in the PP matrix. The disappearance of the β phase in GO-enriched PPSTBs agrees with the different mechanical properties observed by AFM, at least for the highest investigated concentration of GO. In the present work, the biological effects of GO-enriched PPSTBs were evaluated in an in vitro model of hGFs, focusing on the inflammatory process and the modulation of the TLR4/MyD88/NFκB p65/NLRP3 pathway. The Toll-like receptor (TLR) family comprises receptors present on cell surfaces or in internal compartments such as the endoplasmic reticulum (ER), endosomes, and lysosomes. These receptors are formed by an ectodomain responsible for the recognition of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), a transmembrane domain, and a cytoplasmic Toll/IL-1 receptor (TIR) domain, which intervenes in the activation of downstream signaling. LPS binds and activates TLR4 through the formation of a complex composed of LPS binding protein (LBP) and the accessory proteins CD14 and MD2. In turn, the activated TLR4 binds myeloid differentiation factor 88 (MYD88), which activates the intracellular signaling cascade that ends with the phosphorylation of serine residues on the inhibitors of the transcription regulator nuclear factor kappa B (NFκB). The activated form of NFκB is translocated from the cytoplasm to the nucleus, where it binds specific DNA elements and regulates the transcription of target genes, resulting in increased IL-18, IL-6, IL-1β, tumor necrosis factor-α (TNF-α), and MCP-1. Based on the literature, stimulation with LPS is responsible for the activation of the TLR4/NFκB pathway, which is involved in the upregulation of NOD-Like Receptor Protein 3 (NLRP3), a component of the NOD-like receptors that form the inflammasome complex.
Inflammasomes are intracellular protein complexes that assemble in response to PAMPs or DAMPs and induce the inflammatory reaction through the activation of caspase 1. Moreover, it has been demonstrated that LPS induces intracellular ROS and promotes the differentiation of M1 macrophages, which are key effector cells for the elimination of pathogens, virally infected cells, and cancer cells. Our in vitro data suggest that both hGFs alone and hGFs cultured with PPSTBs show an increase in the expression of the inflammatory mediators TLR4/MyD88/NFκB p65/NLRP3 when treated with LPS-E. In contrast, hGFs cultured with PPSTBs enriched with 5 μg/mL or 10 μg/mL of GO showed a significant reduction in TLR4/MyD88/NFκB p65/NLRP3 expression levels. The reduction of these inflammatory mediators observed in hGFs cultured with PPSTBs enriched with both concentrations of GO showed that GO could be responsible for modulating the inflammatory process. The results are particularly relevant because the concentration of GO in these PPSTBs is very low (<0.1%). Moreover, to further support the data obtained, the gene expression of the principal markers involved in cell adhesion, such as Fibronectin, Vimentin, Vinculin, Focal Adhesion Kinase (FAK), and Integrin α5β1, was also investigated. Cell adhesion to extracellular matrix (ECM) proteins is essential for regenerative processes and for maintaining tissue homeostasis, as well as for wound healing processes. Cell adhesion is a fundamental biological event that defines cell and tissue morphogenesis by intervening in the modulation of cell differentiation, cell cycle, and survival. The adhesion proteins, the main players in this event, are membrane receptors that allow the cells to arrange themselves three-dimensionally to form the tissue and allow their interaction with the surrounding environment. The affinity of the cells for the biomaterial substrate depends on the ECM molecules and represents a key factor for the development of the biomaterial. In our study, an increase in the gene expression of the principal markers involved in cell adhesion processes was detected in hGFs cultured with PPSTBs-GO 5 μg/mL and PPSTBs-GO 10 μg/mL, both untreated and treated with LPS-E. In detail, a significant increase in FN1/VIM/VCL/PTK2/ITGA5 and ITGB1, transcribing, respectively, Fibronectin, Vimentin, Vinculin, Focal Adhesion Kinase (FAK), and Integrin α5β1, underlines that GO added to PPSTBs promotes cell-to-cell interactions and cell interactions with the surrounding environment. Fibronectin is an ECM protein involved in cell adhesion, spreading, migration, proliferation, and apoptosis. Its interaction with heterodimeric cell surface glycoproteins regulates mechanical anchoring and the formation of focal cell–cell and cell–material adhesion contacts. Specifically, Integrin α5β1 is reported to be highly expressed in human fibroblasts, promoting their motility and survival, as well as in hGFs. The interaction of Fibronectin with Integrins determines a receptor conformational change and its consequent activation, resulting in mechanical coupling to the ligand. Subsequently, the receptors form an adhesion complex containing structural proteins, such as Vinculin, and signaling molecules, such as FAK, involved in the association with cytoskeletal actin and cell anchoring, as well as in transducing signals relating to the ECM.
Vimentin, known as one of the principal proteins of cell intermediate filaments, is reported to enhance the binding of integrin α5β1 to fibronectin and to improve cell–cell interactions through its association with hemidesmosomes and desmosomes. Despite the limitations of the present in vitro study, relevant and positive outcomes have been obtained. Taken together, these findings highlight the anti-inflammatory effects of GO and its capacity to improve cell adhesion, which may play an important role in the early stage of wound healing. Understanding the mechanisms of the release of ECM components and their regulation is essential for developing novel strategies in the field of tissue engineering and regenerative medicine. The biological effects of GO evidenced by our data could result in better and faster healing of tissues treated with suture threads enriched with GO. Consequently, the potential use in the clinical setting of these GO-enriched sutures could reduce the hospitalization times of treated patients and limit, thanks also to the demonstrated antibiofilm activity, the use of postoperative antibiotic therapies.
4.1. Graphene Oxide (GO) The GO aqueous solution was obtained as a commercial sample from Graphenea (Graphenea, Donostia-San Sebastian, Spain) and already characterized by the manufacturing company in terms of exfoliation (monolayer content > 95%), size (<10 μm), and oxidation degree (Elemental analysis: carbon: 49–56%; oxygen: 41–50%). This characterization is very important because it has been proven that the above-mentioned “biological” properties of GO depend strictly on those features. Due to the good properties of this commercial sample, we decided to use this material and add it as a solid . The commercial aqueous solution of 4 g/L GO was added to Ultrapure MilliQ water (electric resistance > 18.2 MΩ cm −1 ) from a Millipore Corp. model Direct-Q 3 system (Merk, Burlington, Massachusetts, US) in order to reach the concentration of 1 mg/mL, and bath ultrasonicated for 10 min (37 kHz, 180 W; Elmasonic P60H; Elma). The concentration of GO has checked spectrophotometrically at λ max 230 nm by using a Varian Cary 100 BIO UV-Vis spectrophotometer. Dimensions of GO flakes were measured by using dynamic laser light scattering (DLS) (90Plus/BI-MAS ZetaPlus multi-angle particle size analyzer; Brookhaven Instruments Corp., Holtsville, NY, US) in order to check that micrometric GO have been obtained, and ultrasonication did not reduce significantly GO flakes dimensions (see ). In order to further characterize the commercial GO, ζ-potential measurements and Raman spectroscopy (XploRA PLUS, HORIBA, Kyoto, Japan) analyses have been performed (see ). GO dispersion was divided into different aliquots and transferred at −80 °C overnight. After GO aliquots were completely frozen, the samples were placed in a freeze dryer for 48 h, generating black GO sponges. An aliquot of dried GO was redispersed in water and characterized by DLS, ζ-potential (see ), and UV-Vis spectrophotometry. The UV-Vis spectrum was registered in order to observe the dispersion behavior and check the real final concentration of GO after the freeze-drying process ( ). Analogously, Raman spectroscopy analyses were performed on the dry GO sample after lyophilization (see ). The amount of GO necessary for the production of 50 g PPSTBs was 5 mg GO for 5 μg/mL samples and 10 mg GO for 10 μg/mL samples. 4.2. GO-Enriched PPSTBs PPSTBs of pure PP were enriched with two different concentrations of GO. Briefly, 50 g of PPSTBs were dissolved at the temperature of 160 °C and thoroughly mixed with 5 mg GO for 5 μg/mL samples and 10 mg GO for 10 μg/mL samples, respectively. As the final step, the molten product was placed in molds to create the PPSTBs (produced and furnished by Assut Europe S.p.A). 4.3. PPSTBs, PPSTBs-GO 5 μg/mL, PPSTBs-GO 10 μg/mL Characterization The PPSTBs substrates were characterized by AFM using the MultiMode 8 AFM microscope (Bruker, Billerica, MA, USA) equipped with a Nanoscope V controller. The Peak Force Quantitative Nanomechanics (PFQNM) mode was used to map the morphology and to acquire quantitative insight into nanomechanical parameters of PPSTBs substrates, such as Young’s elastic modulus. The PPSTBs and PPSTBs-GO 5 μg/mL samples were mapped using a precalibrated RTESPA-300-30 probe (spring constant 38.904 N/m and resonance frequency of 350.251 kHz), while for PPSTBs-GO 10 μg/mL samples, the precalibrated RTESPA-525-30 cantilever (spring constant 266.124 N/m and resonance frequency of 582.946 kHz) was chosen. 
The deflection sensitivity of both types of cantilevers was measured against a standard Sapphire 12-M sample, and after the calibration, images of 512 × 512 pixels were collected with scan sizes of 5 × 5 μm. To analyze the images, the Nanoscope Analysis 1.8 software was used . The elastic modulus values were calculated by using the Derjaguin–Muller–Toropov (DMT) model, extracting them from each force-distance curve registered at each point of the scanned surface. XRD analysis was performed using the D2 Phaser X-ray diffractometer apparatus (Bruker, Billerica, MA, USA) with Cu Kα radiation (λ = 0.154 nm, 30 kV, 10 mA) as an X-ray source. Scattered X-ray intensities were collected over a range of scattering angle 2 ϴ = 5° to 50° with a scan velocity of 0.05 2θ s −1 . 4.4. Cell Culture hGFs (PCS-201-018 ATCC, Manassas, VT, USA) were cultured in Fibroblast Basal Medium (ATCC PCS-201-030) in addition to Fibroblast Growth Kit-Low Serum (ATCC PCS-201-041), containing 5 ng/mL of rh FGF b, 7.5 mM of L-glutamine, 50 μg/mL Ascorbic acid, 1 μg/mL of Hydrocortisone Hemisuccinate, 5 μg/mL of rh Insulin and 2% Fetal Bovine Serum. The culture was maintained in an incubator at 37 °C in a humidified atmosphere with 5% CO 2 and 95% air, and when the cells reached 75–80% confluence, subcultures were produced. 4.5. Experimental Study Design The experimental points shown in the following study design were performed in triplicate with hGFs at passage 5. The cells were stimulated with LPS derived from E. coli O55:B5 (LPS-E) (L6529, Sigma-Aldrich, Milan, Italy) - hGFs used as negative control (CTRL); - hGFs cultured with PPSTBs for 24 h; - hGFs cultured with PPSTBs-GO 5 μg/mL for 24 h; - hGFs cultured with PPSTBs-GO 10 μg/mL for 24 h; - hGFs cultured with 5 μg/mL of LPS-E for 24 h; - hGFs cultured with PPSTBs and 5 μg/mL of LPS-E for 24 h; - hGFs cultured with PPSTBs-GO 5 μg/mL and 5 μg/mL of LPS-E for 24 h; - hGFs cultured with PPSTBs-GO 10 μg/mL and 5 μg/mL of LPS-E for 24 h. 4.6. Cell Viability Assay The cell metabolic activity of hGFs, hGFs + PPSTBs, hGFs + PPSTBs-GO 5 μg/mL, and hGFs + PPSTBs-GO 10 μg/mL cultured with or without LPS-E was analyzed through the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfo-phenyl)-2H-tetrazolium (MTS) assay (CellTiter 96 ® Aqueous One Solution Cell Proliferation Assay, Promega, Madison, WI, USA). hGFs of each experimental point were seeded at the density of 3.2 × 10 3 cells/well into 96-well plates with Fibroblast Basal Medium (ATCC PCS-201-030) added with Fibroblast Growth Kit-Low Serum (ATCC PCS-201-041) for 24, 48, and 72 h at 37 °C. Then 20 μL/well of MTS dye solution was added to the culture medium, and the plates were incubated for 3 h at 37 °C. The cell viability, defined by formazan salts quantification, was evaluated through absorbance measurements at 490 nm wavelength performed using the Synergy™ HT Multi-detection microplate reader (Biotech, Winooski, VT, USA). The amount of formazan salts detected was directly proportional to the number of live cells in the plate. The MTS assay was executed in three independent experiments . 4.7. Microscope Optical Analysis After 24 h of LPS-E treatment, the morphology of hGFs alone or cultured with PPSTBs, PPSTBs-GO 5 μg/mL, and PPSTBs-GO 10 μg/mL were observed at the inverted light microscope (Leica DMIL, Leica Microsystem) Mag: 10×. 4.8. 
Scanning Electron Microscopy (SEM) hGFs cells were seeded on PPSTBs, PPSTBs-GO 5 μg/mL, and PPSTBs-GO 10 μg/mL attached to the bottom of a 12-well plate with and without stimulation with LPS-E. After 24 h of culture, cells were fixed for 1 h at 4 °C in 2.5% glutaraldehyde (Electron Microscopy Sciences, EMS, Hatfield, PA, USA), in 0.1 M sodium phosphate buffer (PB), pH 7.3, rinsed three times with PB, and post-fixed for 1 h in 1% aqueous osmium tetroxide (EMS) at 4 °C. The cells were dehydrated through an ethanol series (30%, 50%, 70%, 90%, 95%, and two times 100%) followed by drying in air and carbon. Morphological analysis was carried out using a high-resolution scanning electron microscope (SEM) Regulus 8220 (Hitachi, Ltd., Tokyo, Japan) operated at 1 kV. 4.9. Confocal Laser Scanning Microscope (CLSM) The hGFs were cultured in 8-well culture glass slides (Corning, Glendale, AZ, USA) at the density of 1.3 × 10 4 /well. After 24 h of treatment, the cells were fixed 1 h at room temperature with 4% of paraformaldehyde (PFA) (BioOptica, Milan, Italy) in 0.1 M in PBS (Lonza, Basel, Switzerland). After 3 washes in PBS, the cells were permeabilized with 0.1% Triton X-100 (BioOptica) in PBS for 5–6 min and blocked with 5% of non-fat milk in PBS for 1 h at RT. Successively, the primary antibodies were prepared in 2.5% non-fat milk in PBS and maintained overnight at 4 °C. The primary antibody used in this study were all purchased from Santa Cruz Biotechnology (Dallas, TX, USA) and were used, as suggested by their datasheet, at the concentration of 1:200: TLR4 (sc-293072), anti-MyD88 (sc-74532), anti-NFκB p65 (sc-8008), anti-NLRP3 (sc-134306). The secondary antibody Alexa Fluor 568 red fluorescence-conjugated goat anti-mouse (A11031, Invitrogen, Eugene, OR, USA) has been prepared 1:200 in 2.5% non-fat milk in PBS and added 1 h at 37 °C. The cytoskeleton actin and the nuclei have been stained, respectively, with Alexa Fluor 488 phalloidin green fluorescent conjugate (A12379, Invitrogen) and TOPRO (T3605, Invitrogen), both prepared 1:200 in 2.5% non-fat milk in PBS and maintained 1 h at 37 °C. The images were acquired through Zeiss LSM800 confocal system (Carl Zeiss, Jena, Germany) . 4.10. Western Blotting Analysis The lysates of hGFs (50 μg) underwent electrophoresis and were moved to a polyvinylidenfluoride (PVDF) membrane using a SEMI-dry blotting apparatus (Bio-Rad Laboratories Srl, Milan, Italy). Successively, the membranes were blocked in 5% non-fat milk in PBS 0.1% Tween-20 (Sigma-Aldrich) and then incubated overnight at 4 °C with the following primary antibodies: anti-TLR4 (1:500) (sc-293072, Santa Cruz Biotechnology), anti-MyD88 (1:500) (sc-74532, Santa Cruz Biotechnology), anti-NFκB p65 (1:500) (sc-8008, Santa Cruz Biotechnology), anti-NLRP3 (1:500) (sc-134306, Santa Cruz, Biotechnology), and β-actin as loading control (1:750) (sc-47778, Santa Cruz Biotechnology). After five washings with PBS 0.1% Tween-20, the membranes were incubated for 1 h at room temperature with peroxidase-conjugated secondary antibody goat anti-mouse (A90-116P, Bethyl Laboratories Inc., Montgomery, TX, USA) 1:5000 diluted in 2.5% no-fat milk in PBS and 0.1% Tween-20%. The expression levels of the proteins were detected using the enhanced chemiluminescence exposure process (ECL) (Amersham Pharmacia Biotech, Milan, Italy) with an image documenter Alliance 2.7 (Uvitec, Cambridge, UK). The detected signals were analyzed by ECL enhancement and assessed through UVIband-1D gel analysis (Uvitec). 
The data obtained were normalized with values assessed by densitometric analysis of the β-actin protein. The Western blotting analysis was executed in three independent experiments . 4.11. RNA Isolation and Real-Time RT-PCR Analysis TLR4, MyD88, NFκB p65, and NLRP3 mRNA expression were analyzed by Real-Time PCR. Total RNA was extracted using PureLink RNA Mini Kit (Ambion, Thermo Fisher Scientific, Milan, Italy) according to the manufacturer’s instructions. Three independent biological replicates were analyzed for each sample. One microgram of total RNA was retrotranscribed using M-MLV Reverse Transcriptase (M1302 Sigma-Aldrich) to synthesize cDNA for 10 min at 70 °C, 50 min at 37 °C and 10 min at 90 °C according to the technical bulletin. Real-Time PCR was performed with Mastercycler ep real plex Real-Time PCR system (Eppendorf, Hamburg, Germany). The levels of mRNA expression of TLR4, MYD88, RELA, NLRP3, FN1, VIM, VCL, PTK2, ITGA5, ITG1B, and Beta-2 microglobulin (B2M) (endogenous marker) were evaluated in hGFs cells cultured alone, in hGFs cultured with PPSTBs, in hGFs cultured with PPSTBs enriched with GO at 5 μg/mL, in hGFs cultured with PPSTBs enriched with GO at 10 μg/mL, in hGFs stimulated with LPS-E, in hGFs cultured with PPSTBs and stimulated with LPS-E, in hGFs cultured with PPSTBs enriched with GO at 5 μg/mL and stimulated with LPS-E and in hGFs cultured with PPSTBs enriched with GO at 10 μg/mL and stimulated with LPS-E. Commercially available PrimeTime™ Predesigned qPCR Assays TLR4 (Hs.PT.58.38700156.g, Tema Ricerca Srl, Castenaso, Italy); RELA (Hs.PT.58.22880470, Tema Ricerca Srl) MYD88 (Hs.PT.58.40428647.g, Tema Ricerca Srl), NLRP3 (Hs.PT.58.39303321, Tema Ricerca Srl) FN1 (Hs.PT.58.40005963, Tema Ricerca Srl), VIM (Hs.PT.58.38906895; Tema Ricerca Srl), VCL (Hs.PT.58.2753988, Tema Ricerca Srl), PTK2 (Hs.PT.58.524947 Tema Ricerca Srl), ITGA5 (Hs.PT58.4796384 Tema Ricerca Srl), ITGB1 (Hs.PT.58.39883300 Tema Ricerca Srl) and the PrimeTime™ Gene Expression Master Mix (cat.n°1055772, Tema Ricerca Srl) were utilized according to standard protocols ( ). Beta-2 microglobulin (B2M Hs.PT.58v.18759587, Tema Ricerca Srl) was utilized for template normalization. The amplification program included a preincubation step for cDNA denaturation (3 min at 95 °C), followed by 40 cycles consisting of a denaturation step (15 s at 95 °C) and an annealing step (1 min at 60 °C). Expression levels for each gene were performed according to the 2 −ΔΔCt method. Real-Time PCR was performed in three independent experiments. 4.12. Statistical Analysis Statistical significance was established with GraphPad 5 (GraphPad, San Diego, CA, USA) software utilizing one-way ANOVA followed by post hoc Tukey’s multiple comparisons analysis. Values of p < 0.05 were considered statistically significant.
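To make the relative quantification described in Section 4.11 concrete, the following is a minimal Python sketch of the 2 −ΔΔCt calculation; the Ct values in the example are hypothetical placeholders rather than data from this study, and B2M is simply the endogenous reference gene named above.

# Minimal sketch of relative gene expression by the 2^(-DeltaDeltaCt) method.
# All Ct values below are hypothetical placeholders, not data from this study.

def ddct_fold_change(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change of a target gene versus the control condition,
    normalized to an endogenous reference gene (e.g., B2M)."""
    delta_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    delta_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

if __name__ == "__main__":
    # Hypothetical example: a target gene in treated hGFs vs. untreated control
    fold = ddct_fold_change(ct_target_treated=24.1, ct_ref_treated=18.0,
                            ct_target_control=26.3, ct_ref_control=18.2)
    print(f"Fold change vs. control: {fold:.2f}")   # prints 4.00 for these placeholder Cts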
The current work aimed to investigate the possible therapeutic benefit of commercial PP suture threads enriched with GO in a gingival fibroblasts cellular model. Our results showed that GO-fabricated PP suture threads modulated the inflammatory effects induced by LPS-E through TLR4/MyD88/NFκB p65/NLRP3 pathway. The biological effects of suture thread enriched with GO may represent a promising strategy that can be applied in clinical medicine.
A Comprehensive Review on Pharmacologically Active Phyto-Constituents from Hedychium Species
The Himalayan mountain range is an abundant source of medicinally active plants, herbs, and shrubs, and hosts many naturally growing plant species. One of the plant genera found in the Himalayan region is Hedychium, which is considered in this study. Hedychium species (spiked ginger lily) are annual–perennial, rhizomatous, erect flowering plants rich in aromatic and medicinally active compounds. The herb belongs to the family Zingiberaceae . About 100 species of Hedychium are found worldwide , the most explored being Hedychium spicatum , Hedychium coronarium , Hedychium coccineum , Hedychium flavescens , Hedychium gardnerianum etc. They occur in the subtropical area of the Himalayan region of India and are widely found in China, Myanmar, Nepal, Thailand, Madagascar (Africa), the hot tropical regions of Asia, Indo-China, Malaysia, Indonesia etc. . Traditional applications and previous investigations carried out by many researchers on Hedychium species have suggested various medicinal applications, which prompted us to consider Hedychium species for this review. The whole plant, the essential oil, and solvent extracts of the rhizomes are the main sources of biological activity. The composition of the essential oil obtained from dried rhizomes, which shows good anti-inflammatory activity , and extracts obtained from different plant parts (i.e., leaves, rhizomes, and flowers (as shown in )) that contain numerous phytoconstituents (such as Hedychenone, Coronarin-D, Hedychilactone-D, Hedychinal and many more) with anti-inflammatory and other biological activities (such as anti-histaminic, hair growth, skin care, and cytotoxic (in breast cancer) effects etc.) have also been reviewed several times. Therefore, herein we review and discuss the current updated information found in our search of traditional applications and knowledge of the plant, emphasizing and aiming to identify its pharmacological applications for human use. For that purpose, details on ethnopharmacology, phytochemistry, extraction, purification, isolation, identification, characterization, and the pharmacological properties of the chemical constituents obtained from Hedychium species are considered. An exhaustive literature search was accomplished using different online search engines such as PubMed, PubChem, SciFinder, ChemSpider, Science Direct, Mendeley, Scopus, Google, Google Scholar, FPO (Free Patent Online), Espacenet patent search, Research Gate, electronic databases, and publishers’ websites such as Taylor & Francis, Wiley, and ACS Publications (American Chemical Society). The general keywords Hedychium spicatum , Hedychium species , Hedychenone, and labdane di-terpenes were used for the article search. The metadata were compiled, and all information retrieved through the online literature search was organized into different sections, according to its availability and the requirements of an objective systematic review. 2.1. Phyto-Constituents Many chemical constituents have been identified in Hedychium species . These are diterpenes, flavonoids, sesquiterpenes and the essential oil of flowers and rhizomes, which contain aromatic compounds (listed in ); their chemical structures and constituents are shown in A and B, respectively. 2.1.1. Terpenes Furanoid Di-terpene: A 50% extract of rhizomes of Hedychium spicatum contains a Furanoid diterpene used in the treatment of pain, inflammation, and stomach ailments.
Purification by column chromatography with silica gel and benzene yielded Hedychenone (MP 135–136 °C, [α] D +142° in CHCl 3 , λ max 239 nm). Confirmation tests (the Liebermann–Burchard and Ehrlich tests) yielded an orange color with Acetic acid and H 2 SO 4 . shows the structures of Furanoid diterpenoids such as Hedychenone{4-[( E )-2-(furan-3-yl)ethenyl]-3,4a,8,8-tetramethyl-4a,5,6,7,8,8a-hexahydronaphthalen-1(4 H )-one} . Hydrogenation of the Hedychenone side chain at position Δ 11 with Pd/C yielded 11,12-dihydrohedychenone{4-[2-(furan-3-yl) ethyl]-3,4a,8,8-tetramethyl-4a,5,6,7,8,8a-hexahydronaphthalen-1(4 H )-one} (λ max 220, 240 nm due to Furan and Enone chromophores), Ozonolysis of Hedychenone yielded β-furaldehyde (2,4-dinitrophenylhydrazone) MP. 147 °C. Reduction of Hedychenone and 11,12-dihydrohedychenone with LAH saturation of Δ 7 double bond Hedychanone (λ max 216, [α] D +62°) and 11,12-dihydrohedychanone (λ max 216, [α] D +26°) yielded , 7-hydroxyhedychenone (13-beta-furanolabda-6-keto-7,11-dien-7-ol), MP-108–109 °C, [α] D +125°, λ max 215, 230, and 278 nm. Acetylation of 7-Hydroxyhedychenone gives its mono acetate, and hydrogenation with Pd/C yield dihydro-7-hydroxyhedychenone [α] D +0.7°, by reduction with LAH yielded 7-hydroxyhedychanone; acetylation of 7-hydroxyhedychanone obtained its acetate . The 9-hydroxyhedychenone and 7-Acetoxy Hedychenone reaction scheme is shown in . Labdane diterpene: Cytotoxic compounds Coronarin A, Coronarin B, Coronarin C, and Coronarin D were isolated from Hedychium coronarium, also used in rheumatism in Brazil. Hedychium coronarium also contains Coronarin E and Coronarin F, isolated and purified by silica gel chromatography of chloroform extract. Coronarin E (C 20 H 28 O, colourless, [α] D +22.3° was confirmed with IR and 1 H NMR, 13 CNMR data, λ max 234 nm, e = 9100, m / z 137); Coronarin F (C 30 H 46 O 3 , colourless needles, M.P.157–159 °C, [α] D +90.0 °C containing exo-methylene was confirmed in IR bands at 3080, 1640, 890 cm −1 , and its compound was confirmed by 1 H NMR and 13 C NMR) . Methanol extract of Hedychium coronarium was purified with liquid–liquid extraction with ethyl acetate and water, and both reversed-phase and ordinary-phase column chromatography were performed on an ethyl-acetate fraction using the silica gel method. Hedychilactone A (C 20 H 30 O 3 ) was isolated as a colourless liquid, λ max at 227 nm, log ε 4.08, [α] D +12.3°; Hedychilactone B (C 20 H 30 O 3 ) was isolated as colourless liquid, [λ] D +10.6°, and an IR spectrum showed an absorption band at 3496, 1750, and 1674 cm −1 . Hedychilactone C (C 20 H 30 O 4 ) was isolated as colourless liquid, with λmax at 222 nm, log ε 3.92 and [λ] D +23.8 °C . Farnesane-type sesquiterpenes Hedychium coronarium (cultivated in Japan) contain Heychiols A, Hedychiols B 8,9-diacetate, and Farnesane-type sesquiterpenes. Methanol extract of Hedychium coronarium was purified with liquid–liquid extraction with ethyl acetate and water, and both reversed-phase and ordinary-phase column chromatography were performed on an ethyl-acetate fraction using the silica gel method. Hedychiol A (C 15 H 26 O 2 ) Hedychiol B 8,9-iacetate (C 19 H 30 O 5 ) were isolated as colourless oil and [λ] D −18.8° . 2.1.2. Flavonoids Leaves of Hedychium coccineum and Hedychium coronarium contain flavanols myricetin and quercetin. Glycoside syringetin 3-rhamnoside has been identified in Hedychium stenopetalum . A flavonoid aglycone moiety was identified after acid hydrolysis of 80% methanolic leaf extract. 
The hydrolyzed product was identified by TLC, using a standard marker solution based on the R f value under visualization in a UV chamber . A dichloromethane and methanol (1:1) solvent was used to extract the Hedychium spicatum rhizome. Chrysin was isolated from an ethyl acetate/ether/hexane (25:14:61) fraction by silica gel (100–200 mesh) column chromatography . Chloroform extract of the Hedychium spicatum rhizome was chromatographed over silica gel (60–120 mesh), and the Tectochrysin-containing fraction F2 was further purified by column chromatography over silica gel (100–200 mesh) using a Methanol:Chloroform (7:93) solvent, and was characterized by IR, MS, and 1D and 2D NMR . 2.1.3. Glycoside Syringetin-3-rhamnoside was identified in Hedychium stenopetalum . Extraction of Hedychium coronarium flowers in 80% aqueous acetone and chloroform yielded Coronalactoside I, obtained as a white powder . 2.1.4. Xanthone Hedychium gardnerianum Rosc. rhizome was extracted by successive extraction methods using hexane and acetone. The extract was purified by silica gel column chromatography using a chloroform and methanol gradient mixture, and was further purified using silica gel preparative TLC and crystallization in methanol, yielding 3-(2-Hydroxyethoxy)xanthone, 1-Hydroxyxanthone, Oplopanone, and Salicylic acid (2-Hydroxybenzoic acid), with 1-Hydroxyxanthone having MP 143–145 °C and UV λmax (MeOH, nm) 230, 251, 297, 361 . 2.1.5. Saponins Aqueous and alcoholic extracts of the powdered rhizome drug of Hedychium spicatum passed the foaming and physicochemical tests for saponins, although the individual compounds have not yet been identified. 2.2. Essential Oil The physicochemical properties of the essential oil obtained from Hedychium spicatum are given in . The oil contains alpha-pinene, beta-pinene, Limonene, 1,8-Cineole, and Linalool in major amounts, and Camphor, Linalyl acetate, Terpineol, Borneol, Caryophyllene, r-Cadinene, Humulene, Terpinolene and p-Cymene in low quantities. These compounds were studied by TLC and GLC methods . The chemical composition of the essential oil of Hedychium spicatum rhizomes determined by gas chromatography shows that the essential oil contains Caryophyllene, monoterpenes, sesquiterpenes, and sesquiterpene alcohols . The essential oil was isolated from chopped rhizome by steam distillation, and the distillate was saturated with NaCl and then further extracted with petroleum ether (60–80 °C) and hexane. Compounds were isolated from the essential oil (boiling point 60–80 °C, 0.3%) by fractional distillation with a spinning band distillation assembly based on boiling point. 1,8-Cineole (boiling point 62 °C) and linalool (boiling point 120 °C), as well as five alcohols: Elemol (26%), (−)-epi-10-gamma-Eudesmol (19%), and a mixture (30%) of (−)-alpha-Cadinol, alpha-Eudesmol, and beta-Eudesmol, were isolated. Bottini et al. reported the essential oil constituents isolated from different Hedychium species , which are listed in and .
The Epimeric alcohols are mesylated in the presence of 2,6-lutidine, which produces an elimination product with trans configuration. Cleavage of silyl ether followed by oxidation produces an intermediate which produces Hedychenone on isomerization, and upon reduction with di-isobutyl, aluminum hydride yields Yunnacoronarin A. The reaction scheme is shown in . 2.3.2. Synthesis of Yunnacoronarin D from Hedychenone Yunnacoronarin D is synthesized from Hedychenone by oxidation of the allylic methyl group and reduction of the aldehyde group. Photo-oxidation of Hedychenone also produces Yunnacoronarin ; the reaction scheme is shown in . 2.3.3. Synthesis of Hedychenone and Hedychilactone-B from a Hindered Diene System A diene system undergoes [4+2] cycloaddition with allene carboxylate, which produces an intermediate. The [2+2] cycloadduct of this intermediate yields a cyclo-butane intermediate. The cyclo-butane intermediate provides a mixture of Diels-alder adducts; they are Exo and Endo isomers. The Exo isomer is asymmetrically converted to Hedychenone by reduction, oxidation, olefination, and de-sialylation in the presence of 3-furyl ylide . The Ester cycloadduct intermediate is reduced to give intermediate aldehyde, which reacts with tri-phenylphosphoranylidene lactone in methane dichloride to yield Hedychilactone B ; the reaction scheme is shown in . 2.3.4. Synthesis of Derivative of Hedychenone Hedychenone converted into 6,7-dihydro Hedychenone in the presence of 10% Pd/C in ethanol yielded 6,7-Dihydro Hedychenone. 6,7-Dihydro Hedychenone further reduced with LiAlH 4 in THF at 0° C yielded 6,7,11,12-tetra Hydro Hedychenone. Reduction of Hedychenone with aluminium mercury alloy yielded a dimerization product. Ozonolysis of Hedychenone with O 3 in DCM at −10 °C produced an aldehyde product ; the reaction scheme is shown in . Hedychenone undergoes epoxidation with m-CPBA in DCM at room temperature. The synthesised compound was further characterized by NMR, Mass and FTIR. The SAR indicates that the Furanoid ring system has good cytotoxicity, rather than the Decalone nucleus. Dimerization through C-8 was found to significantly enhance the cytotoxic activity of Hedychenone. 2.3.5. Other Reactions Enzymatic hydrolysis of Coronalactosides I with Naringinase Coronalactoside-I in 0.1 M acetate buffer (pH 3.8, 1.0 mL) was treated with Naringinase solution at 40 °C for 24 h, thereby forming Coronalactone. Workup and chromatography was performed on the mixture over reverse-phase silica gel. Coronalactone was found to be a colourless oil, and [α] D -12.9°. The reaction scheme is shown in . Labdane diterpenes Hedychenone was converted to Yunnacoronarin A, Yunnacoronarin D, Hedychilactone B, 6,7-Dihydrohedychenone, 6,7,11,12-Tetra hydro Hedychenone, Aldehyde product of Hedychenone, and Dimer of Hedychenone were found to be more cytotoxic than Hedychenone. Coronalactone had an active moiety of Coronalactoside, and hepatoprotective activity. 2.4. Herbal and Traditional Uses of Hedychium species Various Hedychium species are used to treat ailments and diseases in traditional herbal medication. Extracts, decoctions, infusions, macerates, oils and squeezed liquid forms are used for different administrations, as listed in . 2.5. Pharmacological Activity of Hedychium species Hedychium species possess various pharmacodynamic activities based on different activity that has been carried out by researchers. contains the pharmacodynamic activities of the extract, and the chemical constituents present in extract. 
2.5.1. Anti-Inflammatory and Analgesic Activity In a personal communication report by Dhawan, B.N. (Pharmacology Division, CDRI, Lucknow), it was reported that the ethanolic extract of the Hedychium spicatum rhizome has anti-inflammatory properties . In Vitro Anti-Inflammatory and Analgesic Effect Shrotriya et al. evaluated successive hexane, chloroform, and methanol extracts of Hedychium coronarium rhizomes for analgesic activity. Acetic Acid-Induced Writhing Test for Analgesic Effect This test found that the chloroform and methanolic extracts at 400 mg/kg bw both inhibited the writhing reflex (27.23% by the chloroform and 40.59% by the methanolic extract) in eight groups of pre-screened Swiss albino mice, with significance of p < 0.001, using 50 mg/kg body weight of aminopyrine as control. Inhibition was measured using the formula: (1) % Inhibition of writhing = (1 − W t /W c ) × 100, where W t and W c are the numbers of writhes in the treated and control groups, respectively. Radiant Heat Tail-Flick Method for Analgesic Activity Tail-flick latency was assessed using morphine 2 mg/kg body weight as control and an analgesiometer. The percentage elongation measures showed a significant response to radiant heat, with p < 0.001 . Carrageenan-Induced Rat Hind Paw Edema Animal Model for Anti-Inflammatory Study This model was used to estimate acute inflammation. Different concentrations of the hexane, chloroform, and methanol extracts were tested against PBZ (Phenylbutazone) at an 80 mg/kg dose, and the percentage inhibition was calculated using the following formula: (2) % Inhibition of paw edema = (1 − V t /V c ) × 100, where V c and V t represent the paw volumes of the control and treated groups. The study found that the chloroform and methanolic extracts of Hedychium coronarium rhizome exert significant ( p < 0.01) inhibition of paw volume, at 27.46% and 32.39%, respectively, with 400 mg/kg body weight; meanwhile, the control, with 80 mg/kg, showed a 42.54% inhibition with a significance value of p < 0.001 . Nitric Oxide Inhibitory Effect In Vitro Inhibitory Assay of NO Produced in LPS- and IFN-γ-Stimulated RAW 264.7 Macrophages Labdane diterpenes Hedychenoid A, Hedychenoid B, Hedychenone, Forrestin A, and Villosin were isolated from the rhizomes of Hedychium yunnanense by ethanol maceration, and further purified by liquid–liquid extraction using ethyl acetate and water, then butanol. Hedychenoid B and Villosin had an inhibitory effect, with IC 50 values of 6.57 ± 0.88 and 5.99 ± 1.20 µg/mL, respectively . Inhibition of NO Production and iNOS Induction in LPS-Activated Mouse Peritoneal Macrophages NO is a free radical produced by oxidation of L-arginine by NO synthase (NOS). NO is involved in various processes, e.g., vasodilation, nonspecific host defense, ischemic reperfusion injury, and chronic and acute inflammation, which respond to pro-inflammatory agents such as interleukin-1β, tumour necrosis factor-α, and LPS in macrophages, endothelial cells, and smooth muscle cells. Hedychilactone A, Hedychilactone B, Hedychilactone C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17),13(14)-dien-15,16-olide, Hedychenone, 7-Hydroxyhedychenone, -Nerolidol, Hedychiol A, Hedychiol B 8,9-diacetate, and L-NMMA were tested for inhibitory effects, and the study found that concentrations of 10 µM–100 µM caused significant inhibition, with p < 0.05 to p < 0.01 . shows the mechanism of NO production. Inhibition of Acetic Acid-Induced Vascular Permeability in Mice (Anti-Inflammatory) Histamine and serotonin play an important role in the vascular permeability induced by acetic acid, an exudative state of inflammation.
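Returning to Equations (1) and (2) above, the following is a minimal Python sketch of the percentage-inhibition arithmetic they describe; the example values are hypothetical and are not taken from the cited studies.

def percent_inhibition(treated: float, control: float) -> float:
    """Percentage inhibition as in Equations (1) and (2):
    (1 - treated/control) * 100, where 'treated' and 'control' are the mean
    writhing counts or paw volumes of the treated and control groups."""
    if control == 0:
        raise ValueError("Control value must be non-zero")
    return (1 - treated / control) * 100

if __name__ == "__main__":
    # Hypothetical example: 30 writhes in the treated group vs. 50 in the control group
    print(f"Inhibition of writhing: {percent_inhibition(30, 50):.1f}%")   # 40.0%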
The anti-inflammatory effects of methanolic extract of Hedychium coronarium and some labdane diterpenes (Coronarin D, Coronarin D methyl ether) were tested, and the study found that methanol extract (Dose 250–500 mg/kg), Coronarin D (Dose 25–50 mg/kg), and Coronarin D methyl ether (Dose 25–50 mg/kg) had a significance of p < 0.05– p < 0.01 . Inhibition of Released Beta-Hexosaminidase from RBL-2H3 Cells In Vitro (Antiallergic) The 12 compounds of Hedychiol A, Hedychiol B 8,9-diacetate, Hedychilactone A, B, C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17), 13(14)-dien-15,16-olide, Hedychenone, 7-hydroxyhedychenone, and -Nerolidol from Hedychium coronarium were examined for an antiallergic reaction by testing their inhibitory effect on the release of beta-Hexosaminidase from RBL-2h3 cells. The study found that the compounds have inhibitory concentrations from 10 µM to 100 µM, which were significantly different from the control, with p < 0.05 to p < 0.01 . Coronarin G, Coronarin H, Coronarin I, Coronarin D, Coronarin D methyl ether, Hedyforrestin C, (E)-nerolidol, b-sitosterol, daucosterol, and stigmasterol isolated from Hedychium coronarium were evaluated. The inhibitory effect of the compounds was tested on Lipopolysaccharide-stimulated production of pro-inflammatory cytokines in bone marrow-derived dendritic cells. The compounds Coronarin G, Coronarin H, and Hedyforrestin C were significant inhibitors of LPS-stimulated TNF-α, IL-6, and IL-12 p40 production, with IC 50 ranging from 0.19 ± 0.11 to 10.38 ± 2.34 µM, and the other compounds are cytotoxic . shows stimulation of TNF-α. Hedycoronen A, Hedycoronen B, labda-8(17),11,13-trien-16,15-olide, 16-hydroxyl-abda-8(17),11,13-trien-15,16-olide, Coronarin A, and Coronarin E were isolated from Hedychium coronarium rhizome extract with methanol, and further purification was carried out through successive liquid–liquid extractions with water, chloroform, and water and ethyl acetate, followed by chromatography over silica gel. Hedycoronen A and Hedycoronen B were found to have a potent inhibitory effect on LPS-stimulated interleukin-6 (IL-6) and IL-12 p40, with IC 50 ranging from 4.1 ± 0.2 to 9.1 ± 0.3 μM. Hedycoronen A and Hedycoronen B were found to have moderate inhibitory activity on tumor necrosis factor-α (TNF-α) production, with IC 50 values of 46.0 ± 1.3 and 12.7 ± 0.3 μM . Hedychicoronarin, Peroxycoronarin D, 7β-hydroxycalcaratarin A, (E)-7β-hydroxy-6-oxo-labda-8(17),12-diene-15,16-dial, Calcaratarin A, Coronarin A, Coronarin D, Coronarin D methyl ether, Coronarin D ethyl ether, (E)-labda-8(17),12-diene-15,16-dial, ergosta-4,6,8(14),22-tetraen-3-one, a mixture of β-sitostenone and β-stigmasta-4,22-dien-3-one, 6β-hydroxystigmast-4-en-3-one, 6β-hydroxystigmasta-4,22-dien-3-one, and a mixture of stearic acid and palmitic acid were isolated from Hedychium coronarium . The inhibitory effect of the compounds was tested on superoxide radical anion generation. Elastase release by human neutrophils was evaluated in response to fMet-Leu-Phe/cytochalasin B. Compounds 7β-hydroxycalcaratarin A and (E)-7β-hydroxy-6-oxo-labda-8(17),12-diene-15,16-dial, calcaratarin A, (E)-labda-8(17),12-diene-15,16-dial, and ergosta-4,6,8(14),22-tetraen-3-one had an inhibitory concentration of IC 50 < 6.17 µg/mL . 2.5.2. Antioxidant Activity Chan et al., 2008 carried out estimation of total phenolic content and free radical-scavenging activities using a Folin–Ciocalteu, and DPPH radical scavenging assay. 
The methanolic extract of the leaves of Hedychium coronarium (from the Lake Gardens of Kuala Lumpur, Malaysia) was used for the study. The methanolic extract obtained from rhizomes of Hedychium spicatum was found to have potent antioxidant activity . The total phenolic content was estimated using the gallic acid calibration equation y = 0.0111 × 0.0148 (R 2 = 0.9998), and the total phenolic content of Hedychium coronarium was 820 ± 55 mg GAE/100 g, with significance of p < 0.05. The DPPH radical scavenging activity was calculated as IC 50 and expressed as the ascorbic acid equivalent antioxidant capacity (AEAC), with IC 50 = 0.00387 mg/mL, whereas Hedychium coronarium had a capacity of 814 ± 116 mg AA/100 g, with significance of p < 0.05; AEAC (mg AA/100 g) = IC 50(ascorbate) /IC 50(sample) · 10 5 . Essential oil from Hedychium gardnerianum Sheppard ex Ker-Gawl leaves was evaluated for DPPH antioxidant activity, and good antioxidant activity was found against ascorbic acid and BHT as standards . 2.5.3. Anti-Microbial Activity and Anti-Fungal Activity Anti-Microbial Activity and Anti-Fungal Activity by Disk Diffusion Method Aqueous, methanol, ethanol, acetone, and hexane extracts of Hedychium spicatum rhizome were evaluated against B. subtilis , S. aureus , M. luteus , E. coli , A. flavus , A. fumigatus , M. gypseum , and C. albicans , and the zone of inhibition was recorded (in mm) at different doses (mg/disc). The results showed that the extracts were active against B. subtilis , M. luteus , E. coli , A. flavus , and C. albicans , and not active against S. aureus , A. fumigatus , and M. gypseum . The essential oil of Hedychium coronarium was obtained through hydrodistillation, and the composition of the essential oil was identified by gas chromatography combined with mass spectrometry with a flame ionization detector; the major components were β-pinene, eucalyptol, linalool, Coronarin-E, etc. The essential oil exhibited DPPH radical-scavenging activities, and also inhibited C. albicans and F. oxysporum . Bisht, G.S. et al., 2006 carried out an anti-microbial study of petroleum ether, benzene, chloroform, ethyl acetate, acetone, ethanol, and aqueous extracts, and of the essential oil, from Hedychium spicatum rhizome. Dimethyl sulfoxide (DMSO) was used as the diluent for the extracts, and Tween-20 was used as the diluent for the essential oil. The following bacterial strains: Bacillus cereus G , Staphylococcus aureus (KI-1A) G , Staphylococcus aureus G , Alcaligenes faecalis G (−), Escherichia coli G (−), Escherichia coli (MTCC 1687) G (−), Klebsiella pneumoniae G (−), Pseudomonas aeruginosa (MTCC 424) G (−), Salmonella typhi G (−), Shigella dysenteriae G (−), and fungal strains, i.e., Alternaria solani , Aspergillus fumigatus , Aspergillus flavus , Aspergillus niger , Candida albicans (MTCC 227), Fusarium oxysporum , Mucor racemosus , Penicillium monotricales , Penicillium spp., Rhizopus stolonifer , Trichoderma viride , and Trichoderma lignorum , were used. The concentrations of the anti-microbial and anti-fungal agents tested were 20 mg/disc for the extracts and 500 µL/disc for the essential oil, respectively. Gentamycin (10 µg/disc), penicillin (10 units/disc), Vancomycin (30 µg/disc), and Methicillin (5 µg/disc) were used as standard anti-microbial agents, and Cycloheximide (30 µg/disc) was used as a fungicide, using a Petri dish diffusion/agar diffusion test . S. Joshi et al.
carried out an antimicrobial assay using a Petri dish diffusion method, testing the rhizome essential oil (50 µL/disc) of Hedychium ellipticum , Hedychium aurantiacum , hedychium coronarium , and Hedychium spicatum and control Amikacin, Ciprofloxacin, Ampicillin, Gentamycin, and Tetracycline on bacterial strains S. aureus , Sal. Enterica , Pasteurella multocida , Shigella flexneri and Escherichia coli . The minimum inhibitory concentration of essential oil was found to range from 0.97 to 62.5 µL/mL, depending on the susceptibility of the tested organism . In Vitro Antimicrobial Activity Using Agar Diffusion/Disk Diffusion Method A Mueller–Hinton agar plate was cultured with microbial broth culture to study the zone of inhibition, using Hedychium coronarium rhizome essential oil (Contains Mono-terpenes, Di-terpenes, and sesqui-terpenes), by cylinder plate method. The plates were incubated for 24 h at 37 °C for bacteria and 24–48 h at 28 °C for fungi. The bacteria Bacillus subtilis and Pseudomonas aeruginosa and the yeast-like fungus Candida albicans and Trichoderma sp. (Dermatophyte) were studied. The study found that the essential oil of rhizomes has a better inhibitory effect against Trichoderma sp. and Candida albicans than against Bacillus subtilis and Pseudomonas aeruginosa . 2.5.4. Anthelmintic Property Aqueous, hydro-ethanolic, hydro-methanolic, and methanol extract were evaluated for activity; the study found that methanolic extract of Hedychium spicatum was as effective as Thiabendazole at 2%, 4%, and 6% concentrations . Caenorhabditis Elegans Mobility Test for Anthelmintic Activity Lima A et al., 2021 carried out anthelmintic activity using adult C. elegans , both susceptible (wild-type and Bristol N2), and Ivermectin-resistant; a balance saline solution was used in 24-well plates for treatments with nematodes. The concentration of Hedychuim coronarium rhizome essential oil and standard monoterpenes, i.e., α-Pinene, β-Pinene, (S)-(−) Limonene, (R)- Limonene 1,8-Cineole, and p -Cymene, was found to range from 0.009 to 10 mg/mL, diluted in DMSO 1%. After 24 h at 24° C, mortality was evaluated with a negative control mixture of M-9 solution and Ivermectin as positive control. The study found that the essential oil of Hedychium coronarium rhizomes had an IC 50 concentration of 0.082 mg/mL (IC 95 concentration 0.058–0.117 mg/mL) for the Bristol N2 strain, and an inhibitory concentration of IC 50 of 0.82 mg/mL (IC 95 0.556–1.2 mg/mL) for the Ivermectin-resistant strain, with a significance value of p < 0.05 . 2.5.5. Anti-Histaminic, Mast Cell-Stabilizing and Bronchodilator Effect An in vitro study of hydroalcoholic extract of root composition containing Hedychium spicatum root was carried out to examine its antihistaminic effect (with Histamine dihydrochloride as control) on Guinea pigs, with a composition of 50 mg/kg of Hydroalcoholic extract. It effectively works as a preventive-type antagonist. When investigating mast cell stabilization (Ketotifen fumarate as control) on rats, a 1000 µg/mL concentration composition produced 54–58% inhibition of mast cell degranulation, with a significance value of p < 0.001. The bronchodilatory effect of the extract composition on histamine-induced bronchospasm (80–86%) was investigated in guinea pigs; a 200 mg/kg–500 mg/kg composition increased pre-convulsion time by 27–36%, with a significance value of p < 0.001 . 2.5.6. 
Cytotoxic Activity Sesquiterpenes isolated from Hedychium spicatum (Eudesma-4(15)-ene-β-11-diol, Cryptomeridiol, β-Eudesmol, 3-Hydroxy-β-eudesmol, Mucrolidin, Oplopanone, α-Terpineol, Elemol, Dehydrocarissone, Δ7-β-Eudesmol, Opladiol, Hydroxycryptomeridiol, β-Caryophyllene oxide, Coniferaldehyde and Ethyl ferulate) were examined for their inhibitory effects against A-549, B-16, HeLa, HT-29, NCI-H460, PC-3, IEC-6 and L-6 cancer cell lines. The results showed that the compounds had potent cytotoxic activity, with IC 50 values of 0.3 μg/mL and 1.80 μg/mL . Cytotoxic Screening Test A chloroform extract of the rhizome of Hedychium coronarium was eluted over silica gel column chromatography and seven fractions were evaluated for cytotoxicity, which was tested by a total cell packed volume method using Sarcoma 180 ascites in mice. Coronarin A (IC 50 = 1.65), (E)-labda-8(17),12-diene-15,16-dial (IC 50 = 18.5), Coronarin B (IC 50 = 2.70), Coronarin C (IC 50 = 17.5), and Coronarin D (IC 50 = 17.0) were also tested by an inhibition of colony formation method using Chinese hamster V-79 cells; cytotoxicity was determined by T/C values (the number of stained colonies of the test groups/that of the control group × 100), or by the IC 50, the drug concentration that inhibits colony growth by 50% . In Vitro Cytotoxicity Assay Labdane diterpenes Hedychenoid A, Hedychenoid B, Hedychenone, Forrestin A, and Villosin were isolated from the rhizomes of Hedychium yunnanense by ethanol maceration, and further purified by liquid–liquid extraction using ethyl acetate and water, then butanol. The compounds were tested using an SRB method on SGC-7901 (a human gastric cancer cell line) and HeLa (human cervical carcinoma) cells. The study found that Hedychenoid B, Hedychenone, and Villosin had cytotoxicity against SGC-7901, with IC 50 values of 14.88 ± 0.52, 7.08 ± 0.21 and 7.76 ± 0.21 µg/mL, and against HeLa, with IC 50 values of 9.76 ± 0.48 and 13.24 ± 0.63 µg/mL, respectively . In Vitro Cytotoxic Study of Hedychenone and Its Analogues MCF-7 (breast cancer), HL-60 (human promyelocytic leukemia), CHO (Chinese hamster ovary), A-375 (human malignant melanoma), and A-549 (human lung carcinoma) cell lines were studied . Hedyforrestin D, 15-Ethoxy-hedyforrestin D, Yunnacoronarin A, Yunnacoronarin B and Yunnacoronarin C were tested for cytotoxicity against the lung adenocarcinoma cell line A549 and the leukemia cell line K562 through an MTT assay. The study found that Yunnacoronarin A and Yunnacoronarin B had good activity, with IC 50 values of 0.92 and 2.2 µM. The unsaturated lactone group had an important role in the anti-tumor activity against human lung adenocarcinoma . Compounds derived from the hexane extract of Hedychium coronarium , namely 6-oxo-7,11,13-labdatrien-17-al-16,15-olide, 7,17-dihydroxy-6-oxo-7,11,13-labdatrien-16,15-olide, Coronarin D, Coronarin C, Coronarin D methyl ether, Cryptomeridiol, Hedychenone, 6-oxo-7,11,13-labdatriene-16,15-olide, pacovatinin A, 4-Hydroxy-3-methoxy cinnamaldehyde, and 4-Hydroxy-3-methoxy ethyl cinnamate, were tested against A-549 (lung cancer), SK-N-SH (human neuroblastoma), MCF-7 (breast cancer) and HeLa (cervical cancer) cell lines, showing moderate cytotoxic activity , and antineoplastic activity against brain cancer. The antiproliferative activity of Coronarin D against the Glioblastoma cell line U-251 was reported .
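Because the cytotoxicity studies summarized in this section express their results as IC 50 values, a minimal sketch of how an IC 50 can be estimated from dose-response data is given below, assuming a four-parameter logistic (Hill) model fitted with SciPy; the concentration and viability values used are hypothetical placeholders, not measurements from the cited studies.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data (concentration in ug/mL, viability in % of control);
# these numbers are placeholders, not measurements from the cited studies.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
viability = np.array([98.0, 92.0, 75.0, 45.0, 20.0, 8.0])

def four_param_logistic(x, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model of viability vs. concentration."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Initial guesses: plateaus from the data, IC50 near the middle of the tested range
p0 = [viability.min(), viability.max(), 3.0, 1.0]
params, _ = curve_fit(four_param_logistic, conc, viability,
                      p0=p0, bounds=(0, np.inf), maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.2f} ug/mL (Hill slope {hill:.2f})")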
2.5.7. Ameliorating Potential The protective effect of Hedychium spicatum rhizome powder at concentrations of 4000 ppm and 2000 ppm was tested against 250 ppm IC 50 of Indoxacarb-induced toxicity in a group of cockerels. The ameliorative effect of Hedychium spicatum root powder and its ability to restore the activities and expression of antioxidant, biotransformation, and immune system genes were demonstrated in cockerels fed Indoxacarb . 2.5.8. Hepatoprotective Effect Hepatoprotective Effect on D-GalN-Induced Cytotoxicity in Primary Cultured Mouse Hepatocytes S. Nakamura et al. carried out a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) colorimetric assay in primary cultured mouse hepatocytes. Hepatocytes were isolated by the collagen perfusion method. The 80% aqueous extract, Coronarin B, Coronarin C, Coronarin D, 15-Hydroxylabda-8(17),11,13-trien-16,15-olide, 16-Formyllabda-8(17),12-dien-15,11-olide, Ferulic acid, and Silybin were tested. During the test, formazan was produced. The optical density of the formazan solution at 562 nm (reference: 660 nm) was measured by a microplate reader. The percentage inhibition was calculated by the following formula: [(OD sample − OD control)/(OD normal − OD control)] × 100. The study concluded that the 80% aqueous acetone extract of Hedychium coronarium flower and other chemical constituents had a hepatoprotective effect, with a significant ( p < 0.01) percentage inhibition . The expression of hepatic genes associated with biotransformation, antioxidant, and immune systems in WLH cockerels fed indoxacarb was evaluated, and a protective effect of Hedychium spicatum root extract was found. The extract prevents changes in the expression of antioxidant, biotransformation, and immune system genes . shows inhibition of D-GalN-induced cytotoxicity. Compounds showing a hepatoprotective effect through inhibition of D-GalN-induced cytotoxicity: Coronarin B, Coronarin C, Coronarin D, 15-Hydroxylabda-8(17),11,13-trien-16,15-olide, 16-Formyllabda-8(17),12-dien-15,11-olide, and Ferulic acid. Compounds showing an anti-inflammatory effect through inhibition of LPS-induced NO production: Hedychenoid A, Hedychenoid B, Hedychenone, Forrestin A, Villosin, Hedychilactone A, Hedychilactone B, Hedychilactone C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17),13(14)-dien-15,16-olide, Hedychenone, 7-Hydroxyhedychenone, -Nerolidol, Hedychiol A, and Hedychiol B 8,9-diacetate. Compounds showing an anti-allergic effect through inhibition of TNF-α-induced cytotoxicity: Coronarin G, Coronarin H, Coronarin I, Coronarin D, Coronarin D methyl ether, Hedyforrestin C, (E)-Nerolidol, β-Sitosterol, Daucosterol, and Stigmasterol. 2.5.9. In Vitro Pediculicidal Activity V. Jadhav et al. carried out an in vitro pediculicidal activity study. The hydrodistilled essential oil of Hedychium spicatum rhizomes was tested on P. humanus capitis (Phthiraptera: Pediculidae). Essential oil at 1, 2 and 5% concentrations was blended with coconut oil as a base. A 1% permethrin-based preparation was used as a positive control. Lice clasping hair strands were immersed completely in the test solutions and the marketed preparation for 1 min. Vital signs were measured after 5, 10, 15, 20, 30, 45, 60, 90, and 120 min; lice were judged to be dead if a vital sign was zero. Mortality was observed as 85%, 80%, and 75% for the 5%, 2%, and 1% Hedychium spicatum essential oil; after 2 h, 100% mortality was observed, which is as significant as the 1% permethrin preparation . 2.5.10.
2.5.10. Hair-Growth Promotion Activity
Pentadecane and Ethyl para-methoxycinnamate were isolated from the hexane extract of Hedychium spicatum rhizome and evaluated for in vivo hair-growth promotion activity in female Wistar rats weighing 120–150 g. Pentadecane produced a good reduction in hair-growth time, but the hexane extract showed better activity than the individual compounds.
2.5.11. CNS Depressant Activity of Hedychium spicatum Extract on Rats
Ethanolic, hexane, and chloroform extracts were evaluated at a dose of 100 mg/kg body weight for CNS activity, and the extracts were found to have CNS depressant activity, using Gabapentin and Caffeine (250 mg/kg body weight) as controls.
2.6. Medicinal/Therapeutic Uses of Herbal Compositions
2.6.1. Antimicrobial Composition
A hydroalcoholic distillate of Hedychium spicatum is used with distillates of other plants as an antimicrobial composition in Japan.
2.6.2. Skin Protective Composition
A composition of Hedychium extract has been reported to treat environmental damage to the skin, regulating firmness, tone, wrinkles, and skin texture with a cosmetically acceptable carrier.
Inhibition of UV-Induced Matrix Metalloproteinase-1 (MMP-1)
Epidermal equivalents derived from human epidermal keratinocytes, topically treated with 0% or 0.5% w/w of Hedychium spicatum, were irradiated with solar-spectrum light and analysed by ELISA, which showed that compositions containing Hedychium spicatum extract provide protection against UV-induced matrix metalloproteinase-1 (MMP-1): with 15 MED of UV light, MMP-1 was reduced from 19.3 pg/mL to 11.2 pg/mL by the composition with 0% Hedychium extract, and from 31.2 pg/mL to 2.1 pg/mL by the composition with 0.5% Hedychium extract.
Prevention of Smoke-Induced Loss of Thiols in Normal Human Dermal Fibroblasts
Glutathione works as a redox buffer by maintaining a balance of oxidants and antioxidants. UV exposure depletes antioxidants and glutathione, which leads to higher UVR sensitivity and causes wrinkling of the skin and environmental damage. Ten minutes of exposure to smoke reduces the thiol percentage; a 100 µg/mL Hedychium spicatum extract concentration afforded thiol protection (106 ± 15.8, mean ± SD) in the smoke-exposed groups.
Inhibition of Nitric Oxide Production
The ability of Hedychium extract to inhibit nitric oxide production was tested in LPS-stimulated murine macrophages. Nitric oxide is involved in physiological processes such as vasodilation, neurotransmission, inflammation, and the growth of cancers; nitric oxide combined with a superoxide radical produces peroxynitrite, a highly toxic free radical. Murine RAW 264.7 macrophages were treated with Hedychium extract at concentrations of 10 to 200 µg/mL together with lipopolysaccharide from E. coli, and NO production was inhibited with an IC 50 of 69.97 µg/mL.
2.6.3. Compositions Used to Darken the Skin
Compositions of Hedychium extract, some peptides and other extracts were tested for their ability to darken the skin and have been studied in vitro and in vivo on human skin.
Induced Pigmentation in Human Cell Culture
A keratinocyte–melanocyte culture (human HaCaT keratinocytes) was used to test different peptides (at 50 µM), the pigment melanin and derivatives of melanin (0.0001% w/v to 1% w/v), and plant extract; Forskolin was used as a pigmentation inducer.
L-3,4-Dihydroxyphenylalanine (DOPA) staining and computerized image analysis were carried out (the parameters measured were the surface area of stained material within the melanocytes and keratinocytes, the total surface area of cells in culture, and the related pigmented area). Cell viability was assayed using Alamar Blue™, and the increased pigmentation produced by a peptide + Hedychium extract (50 µM, 0.1% w/v) composition was assessed during the analysis; the mean related pigmented area was 0.050, and the control mean area was 0.150.
Induced Pigmentation In Vivo
Dark-skinned Yucatan micro swine were used to analyse the increase in pigment deposition, with Forskolin or Coleus extract as a positive control (1% w/v); pigment was tested from 1% to 5% w/v, and peptide from 250 µM to 500 µM. Hedychium spicatum extract dissolved in ethanol:propylene glycol at a ratio of 70:30 v/v produced a strong increase in pigment deposition, and some increase in caps.
Darkening of Human Skin
Human skin obtained from patients undergoing cosmetic surgery was tested. Human grafts were treated with a composition of peptide (500 µM) and soluble melanin Melasyn-100™ (1% w/v), with Lysosome (20 mg/mL) as a control; histological sections were evaluated for changes in pigment deposition and capped epidermal cells above the basal layer. The effect of the ethanol extract of the leaves and pseudo stem of Hedychium coronarium on melanogenesis in B16 cells was evaluated (through melanin titration); stimulation of melanin release was inhibited by the extract at a 1 mg/mL concentration, indicating that it may help to inhibit sun-induced pigmentary spots.
2.6.4. Synergistic Antipyretic Formulation
A formulation containing Berberis aristata (15%), Tinospora cordifolia (15%), Alstonia scholaris (10%), Andrographis paniculata (10%), Hedychium spicatum (15%), preservative/sodium benzoate (0.001%), and simple syrup QS (to make the volume up to 100%; see the short sketch below) was tested in dengue- and yeast-induced pyrexia in rats. At 1 h, the raised temperature was reduced, and the significance was high (p < 0.01). Paracetamol (150 mg) was used as the control.
2.6.5. Altering the Perception of Malodor
A composition containing Frankincense, Benzyl benzoate, an aldehyde mixture, Amyl salicylate, essential oil of Hedychium spicatum, Vanillin, Rose oil, Rose oil absolute, Ylang Ylang oil, Mexican pepper leaf oil, and lignaloe wood oil has been described.
2.6.6. Composition Containing Active Sunscreen Agent
Hexane and ethyl acetate fractions containing p-methoxycinnamic acid esters were isolated from Hedychium spicatum extract over silica gel (60–120 mesh). Formulations containing Hedychium spicatum extract were analyzed with an SPF 290S analyzer and showed an SPF of 13.97; the formulations were a sun protection cream, a sun protection shampoo, and a sun protection gel, all containing the cinnamic acid ester active fractions. Skin irritation was tested in guinea pigs, and localized reversible dermal responses were checked, without involvement of the immune response.
2.6.7. Composition Treating Tinea Infection
Hedychium spicatum extract was prepared from the pulverized rhizome by extraction with chloroform at 60 °C (6.5% w/w). Ethyl-p-methoxycinnamate was extracted and isolated by silica gel column chromatography using pet ether in ethyl acetate, with final crystallization from pet ether. The product, M.P. 48–50 °C, was characterized by 1 H NMR and MS (m/z 206, M+).
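For the synergistic antipyretic formulation above, the simple syrup is added quantum satis (QS), i.e., whatever amount brings the total to 100%. The short sketch below only shows that balance calculation using the component percentages quoted in the text; it is an illustration, not a formulation procedure.

```python
# Component percentages of the antipyretic formulation described above.
components = {
    "Berberis aristata": 15.0,
    "Tinospora cordifolia": 15.0,
    "Alstonia scholaris": 10.0,
    "Andrographis paniculata": 10.0,
    "Hedychium spicatum": 15.0,
    "sodium benzoate (preservative)": 0.001,
}

# Simple syrup is added QS (quantum satis), i.e. enough to reach 100%.
syrup_qs = 100.0 - sum(components.values())
print(f"simple syrup QS = {syrup_qs:.3f}% of the final volume")
```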
2.6.8. Anti-TNF Alpha Activity of Hedychium Spicatum Extract and Lead Molecule
TNF-α production was induced with lipopolysaccharide (LPS) and assayed in human peripheral blood mononuclear cells (hPBMCs). The extract of Hedychium spicatum inhibited TNF-α production by 5–96% at concentrations of 10 µg/mL to 100 µg/mL, while Ethyl-p-methoxycinnamate inhibited TNF-α by 0–80% at concentrations of 10 µg/mL to 100 µg/mL in human polymorphonuclear cells.
2.6.9. Ointment Cream Formulation and Its In Vitro Anti-Dermatophytic Activity
A 10% w/w Hedychium spicatum extract was used with the other formulation ingredients. Trichophyton mentagrophytes and Microsporum gypseum cultures were grown on PDA agar slants at 28 °C. Extract of Hedychium spicatum (0.01 to 0.10 mg/mL), Ethyl-p-methoxycinnamate (0.01 to 0.10 mg/mL), and the ointment (100, 50, 25, 10, and 5 mg/mL) were assayed against Ketoconazole 2% (0.5 mg/mL) and Tolnaftate 1% (0.05 mg/mL) as standards. The minimum inhibitory concentration (MIC) was 0.04 mg/mL for the extract, 0.03 mg/mL for Ethyl-p-methoxycinnamate, 0.5 mg/mL for Ketoconazole 2%, and 0.05 mg/mL for Tolnaftate 1% (see the reading sketch below).
2.6.10. Antidiabetic Activity of Extract and Composition Used to Reduce Blood Glucose Level
The ethanol extract of the leaves and pseudo stem of Hedychium coronarium was tested for glucose tolerance in normal rats (a dose of 750 mg/kg reducing blood glucose within 120 min) and in mice with type-II diabetes (a dose of 1.5 g/kg causing a significant reduction in blood glucose, p < 0.01). An intraperitoneal glucose tolerance test was carried out in normal mice (dose 1.5 g/kg, p < 0.01), and an insulin increase test was carried out in normal mice (dose 1.5 g/kg, p < 0.01). A glucose tolerance test was carried out in rats with type-I diabetes (dose 1.5 g/kg, p < 0.01), and an insulin increase test was performed on mice with type-II diabetes (dose 1 g/kg, p < 0.01). An insulin resistance test was carried out in rats with type-I diabetes (dose 1 g/kg, p < 0.01). The water extract of the leaves and pseudo-stem of Hedychium coronarium was tested for glucose tolerance in normal rats (dose 1.5 g/kg, p < 0.01), and the water–ethanol extract was tested for glucose tolerance in normal rats (dose 0.8 g/kg, p < 0.01).
2.6.11. Anti-Inflammatory Composition in Cream
Inflammatory cytokines (TNF-alpha, IL-6, and IL-1beta, in pg/mg protein) were measured in the blood serum of mice by enzyme-linked immunosorbent assay (ELISA), which showed the topical synergistic anti-inflammatory activity of a blend of three essential oils, 60% Cymbopogon citratus oil, 20% Zanthoxylum armatum oil, and 20% Hedychium spicatum oil, which is beneficial in inflammatory arthritis. The macrobiotic composition of Hedychium spicatum extract, along with other ingredients, shows health benefits that are effective in facial and body care for acne, dermatitis, eczema, and conditioning of the skin. Various extract- and essential oil-based formulations have been produced for the treatment of immune system disorders such as cancer and lupus.
2.6.12. Uses in Cracked Heels
A cracked heel cream composition containing Hedychium spicatum extract may be effective as an anti-inflammatory and used as a barrier to protect the skin.
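The MIC values quoted for the anti-dermatophytic formulation above are conventionally read from a dilution series as the lowest tested concentration that still prevents visible growth. The following minimal sketch illustrates that read-out; the dilution series and growth calls are hypothetical and are not data from the cited work.

```python
def read_mic(concentrations_mg_per_ml, growth_observed):
    """Return the MIC: the lowest tested concentration showing no visible growth.
    Inputs are parallel lists ordered from the highest to the lowest concentration."""
    mic = None
    for conc, grew in zip(concentrations_mg_per_ml, growth_observed):
        if not grew:
            mic = conc      # still inhibitory at this (lower) concentration
        else:
            break           # first concentration permitting growth ends the series
    return mic

# hypothetical two-fold dilution series (mg/mL) and growth read-out for an extract
concentrations = [0.32, 0.16, 0.08, 0.04, 0.02, 0.01]
growth = [False, False, False, False, True, True]
print(f"MIC = {read_mic(concentrations, growth)} mg/mL")   # -> 0.04 mg/mL
```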
2.6.13. Therapeutic Effect of Composition of Plant Extract of Hedychium Coronarium Root for Treatment of the Human Body
The mitochondrial network in fibroblasts was irradiated with UVA (Mito-tracker staining, ATP, and NAD+/NADH titration). Cytotoxicity and viability were checked by an XTT assay; concentrations of 0.005% and 0.01% extract were non-cytotoxic to fibroblasts. The lysosomal network in fibroblasts was irradiated with UVA (Lysotracker staining). Cytotoxicity and viability were checked by an XTT assay; concentrations of 0.0005% and 0.01% extract (50% hydro-alcoholic root extract) were non-cytotoxic to fibroblasts. The effect of the root extract on human skin explants was then evaluated using a pollution model; the extract has the potential to fight against inflammatory stress mediators induced by pollution. The effect on β-endorphin production by normal human keratinocytes was evaluated; treatment of normal keratinocytes with extract concentrations of 0.001 to 0.005% stimulated β-endorphin release. The antioxidant potential of the ethanol extract of the leaves and pseudo stem of Hedychium coronarium was evaluated in normal human keratinocytes (using ROS detection with an H2DCFDA probe); the extract showed inhibitory action on ROS production at a concentration of 0.01%, p < 0.122. The effect of the ethanol extract of the leaves and pseudo stem of Hedychium coronarium on autophagic activity in fibroblasts was evaluated after irradiation with blue light (according to an MDC assay, the autophagic activity of fibroblast cells decreases by 13–35% at concentrations of 0.001% to 0.005%).
Many chemical constituents have been identified in Hedychium species. These are diterpenes, flavonoids, sesquiterpenes and the essential oil of flowers and rhizomes, which contain aromatic compounds (listed in ); the chemical structures and constituents are shown in A and B, respectively.
2.1.1. Terpenes
Furanoid diterpene: A 50% extract of the rhizomes of Hedychium spicatum contains a furanoid diterpene used in the treatment of pain, inflammation, and stomach ailments. Purification by column chromatography with silica gel and benzene yielded Hedychenone (MP 135–136 °C, [α] D +142° in CHCl 3, λ max 239 nm). Confirmation tests (the Liebermann–Burchard and Ehrlich tests) gave an orange color with acetic acid and H 2 SO 4. shows the structures of furanoid diterpenoids such as Hedychenone {4-[( E )-2-(furan-3-yl)ethenyl]-3,4a,8,8-tetramethyl-4a,5,6,7,8,8a-hexahydronaphthalen-1(4 H )-one}. Hydrogenation of the Hedychenone side chain at position Δ 11 with Pd/C yielded 11,12-dihydrohedychenone {4-[2-(furan-3-yl)ethyl]-3,4a,8,8-tetramethyl-4a,5,6,7,8,8a-hexahydronaphthalen-1(4 H )-one} (λ max 220, 240 nm due to the furan and enone chromophores), and ozonolysis of Hedychenone yielded β-furaldehyde (2,4-dinitrophenylhydrazone, MP 147 °C). Reduction of Hedychenone and 11,12-dihydrohedychenone with LAH, with saturation of the Δ 7 double bond, yielded Hedychanone (λ max 216, [α] D +62°) and 11,12-dihydrohedychanone (λ max 216, [α] D +26°). 7-Hydroxyhedychenone (13-β-furanolabda-6-keto-7,11-dien-7-ol) has MP 108–109 °C, [α] D +125°, and λ max 215, 230, and 278 nm. Acetylation of 7-Hydroxyhedychenone gives its monoacetate; hydrogenation with Pd/C yields dihydro-7-hydroxyhedychenone ([α] D +0.7°); reduction with LAH yields 7-hydroxyhedychanone, and acetylation of 7-hydroxyhedychanone gives its acetate. The 9-hydroxyhedychenone and 7-Acetoxyhedychenone reaction scheme is shown in .
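The [α] D values quoted for Hedychenone and its derivatives are specific rotations. For reference only, the standard relation between the observed rotation and the specific rotation (not stated explicitly in the text) is:

```latex
% Specific rotation at temperature T and the sodium D line:
% alpha_obs = observed rotation (degrees), l = path length (dm),
% c = concentration of the solution (g/mL).
[\alpha]_D^{T} = \frac{\alpha_{\mathrm{obs}}}{l \cdot c}
```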
Labdane diterpenes: The cytotoxic compounds Coronarin A, Coronarin B, Coronarin C, and Coronarin D were isolated from Hedychium coronarium, which is also used for rheumatism in Brazil. Hedychium coronarium also contains Coronarin E and Coronarin F, isolated and purified by silica gel chromatography of the chloroform extract. Coronarin E (C 20 H 28 O), colourless, [α] D +22.3°, was confirmed by IR, 1 H NMR, and 13 C NMR data (λ max 234 nm, ε = 9100, m/z 137); Coronarin F (C 30 H 46 O 3 ), colourless needles, M.P. 157–159 °C, [α] D +90.0°, contains an exo-methylene group, confirmed by IR bands at 3080, 1640, and 890 cm −1 and further confirmed by 1 H NMR and 13 C NMR. The methanol extract of Hedychium coronarium was purified by liquid–liquid extraction with ethyl acetate and water, and both reversed-phase and ordinary-phase column chromatography were performed on the ethyl acetate fraction using the silica gel method. Hedychilactone A (C 20 H 30 O 3 ) was isolated as a colourless liquid, λ max 227 nm, log ε 4.08, [α] D +12.3°; Hedychilactone B (C 20 H 30 O 3 ) was isolated as a colourless liquid, [α] D +10.6°, with IR absorption bands at 3496, 1750, and 1674 cm −1; Hedychilactone C (C 20 H 30 O 4 ) was isolated as a colourless liquid, λ max 222 nm, log ε 3.92, [α] D +23.8°.
Farnesane-type sesquiterpenes: Hedychium coronarium (cultivated in Japan) contains Hedychiol A, Hedychiol B 8,9-diacetate, and other farnesane-type sesquiterpenes. The methanol extract of Hedychium coronarium was purified by liquid–liquid extraction with ethyl acetate and water, and both reversed-phase and ordinary-phase column chromatography were performed on the ethyl acetate fraction using the silica gel method. Hedychiol A (C 15 H 26 O 2 ) and Hedychiol B 8,9-diacetate (C 19 H 30 O 5 ) were isolated as colourless oils, [α] D −18.8°.
2.1.2. Flavonoids
Leaves of Hedychium coccineum and Hedychium coronarium contain the flavonols myricetin and quercetin. The glycoside syringetin 3-rhamnoside has been identified in Hedychium stenopetalum. A flavonoid aglycone moiety was identified after acid hydrolysis of an 80% methanolic leaf extract; the hydrolyzed product was identified by TLC against a standard marker solution, based on the R f value under visualization in a UV chamber. A dichloromethane and methanol (1:1) solvent was used to extract the Hedychium spicatum rhizome, and Chrysin was isolated from an ethyl acetate/ether/hexane (25:14:61) fraction by silica gel (100–200 mesh) column chromatography. The chloroform extract of the Hedychium spicatum rhizome was chromatographed over silica gel (60–120 mesh), and the Tectochrysin-containing fraction F2 was further purified by column chromatography over silica gel (100–200 mesh) using methanol:chloroform (7:93) as the solvent; the compound was characterized by IR, MS, and 1D and 2D NMR.
2.1.3. Glycoside
Syringetin-3-rhamnoside was identified in Hedychium stenopetalum. Extraction of Hedychium coronarium flowers with 80% aqueous acetone and chloroform yielded Coronalactoside I, obtained as a white powder.
2.1.4. Xanthone
Hedychium gardnerianum Rosc. rhizome was extracted successively with hexane and acetone. The extract was purified over a silica gel column using a chloroform and methanol gradient mixture, further purified by silica gel preparative TLC and crystallization from methanol, yielding 3-(2-Hydroxyethoxy)xanthone, 1-Hydroxyxanthone, Oplopanone,
and Salicylic acid (2-Hydroxybenzoic acid); 1-Hydroxyxanthone has MP 143–145 °C and UV λ max (MeOH, nm) 230, 251, 297, 361.
2.1.5. Saponins
Water and alcoholic extracts of the powdered Hedychium spicatum rhizome passed a foaming test for saponins during physiochemical evaluation of the drug, although the individual compounds have not yet been identified.
The physiochemical properties of the essential oil obtained from Hedychium spicatum are given in . The oil contains alpha-Pinene, beta-Pinene, Limonene, 1,8-Cineole, and Linalool in major amounts, and Camphor, Linalyl acetate, Terpineol, Borneol, Caryophyllene, γ-Cadinene, Humulene, Terpinolene and p-Cymene in low quantities; these compounds were studied by TLC and GLC methods. The chemical composition of the essential oil of Hedychium spicatum rhizomes determined by gas chromatography shows that it contains Caryophyllene, monoterpenes, sesquiterpenes, and sesquiterpene alcohols. The essential oil was isolated from the chopped rhizome by steam distillation; the distillate was saturated with NaCl and then extracted with petroleum ether (60–80 °C) and hexane. Isolation of compounds from the essential oil (0.3%) was carried out by fractional distillation with a spinning-band distillation assembly, with fractions separated on the basis of boiling point: 1,8-cineole (boiling point 62 °C), linalool (boiling point 120 °C), and five alcohols, namely Elemol (26%), (−)-epi-10-γ-eudesmol (19%), and a mixture (30%) of (−)-alpha-Cadinol, alpha-Eudesmol, and beta-Eudesmol, were isolated. Bottini et al.
reported the essential oil constituents isolated from different Hedychium species, which are listed in and .
2.3.1. Synthesis of Hedychenone and Yunnacoronarin A from Larixol
Larixol is a labdane isolated from larch oleoresin; it is converted into an aldehyde by a three-step transformation using osmium tetroxide/sodium periodate oxidation. The free alcohol is protected as a silyl ether, followed by elimination of the tertiary acetate using 2,4,6-collidine, identified by the presence of vinylic protons. 3-Furyllithium is used for the addition reaction to the aldehyde group, giving a mixture of epimeric alcohols. The epimeric alcohols are mesylated in the presence of 2,6-lutidine, which produces an elimination product with trans configuration. Cleavage of the silyl ether followed by oxidation produces an intermediate, which gives Hedychenone on isomerization; reduction with di-isobutylaluminium hydride then yields Yunnacoronarin A. The reaction scheme is shown in .
2.3.2. Synthesis of Yunnacoronarin D from Hedychenone
Yunnacoronarin D is synthesized from Hedychenone by oxidation of the allylic methyl group and reduction of the aldehyde group. Photo-oxidation of Hedychenone also produces Yunnacoronarin D; the reaction scheme is shown in .
2.3.3. Synthesis of Hedychenone and Hedychilactone-B from a Hindered Diene System
A diene system undergoes [4+2] cycloaddition with an allene carboxylate, which produces an intermediate. The [2+2] cycloadduct of this intermediate yields a cyclobutane intermediate, which provides a mixture of Diels–Alder adducts, the exo and endo isomers. The exo isomer is asymmetrically converted to Hedychenone by reduction, oxidation, olefination, and desilylation in the presence of a 3-furyl ylide. The ester cycloadduct intermediate is reduced to give an intermediate aldehyde, which reacts with a triphenylphosphoranylidene lactone in dichloromethane to yield Hedychilactone B; the reaction scheme is shown in .
2.3.4. Synthesis of Derivatives of Hedychenone
Hydrogenation of Hedychenone in the presence of 10% Pd/C in ethanol yielded 6,7-Dihydrohedychenone, and further reduction of 6,7-Dihydrohedychenone with LiAlH 4 in THF at 0 °C yielded 6,7,11,12-Tetrahydrohedychenone. Reduction of Hedychenone with aluminium–mercury alloy yielded a dimerization product. Ozonolysis of Hedychenone with O 3 in DCM at −10 °C produced an aldehyde product; the reaction scheme is shown in . Hedychenone undergoes epoxidation with m-CPBA in DCM at room temperature. The synthesised compounds were further characterized by NMR, MS and FTIR. The SAR indicates that the furanoid ring system, rather than the decalone nucleus, is responsible for the good cytotoxicity, and dimerization through C-8 was found to significantly enhance the cytotoxic activity of Hedychenone.
2.3.5. Other Reactions
Enzymatic hydrolysis of Coronalactoside I with Naringinase: Coronalactoside I in 0.1 M acetate buffer (pH 3.8, 1.0 mL) was treated with Naringinase solution at 40 °C for 24 h, forming Coronalactone. Workup and chromatography of the mixture over reversed-phase silica gel gave Coronalactone as a colourless oil, [α] D −12.9°. The reaction scheme is shown in . Hedychenone was converted to the labdane diterpenes Yunnacoronarin A, Yunnacoronarin D, Hedychilactone B, 6,7-Dihydrohedychenone, 6,7,11,12-Tetrahydrohedychenone, the aldehyde product of Hedychenone, and the dimer of Hedychenone, which were found to be more cytotoxic than Hedychenone.
Coronalactone, the active moiety of Coronalactoside I, showed hepatoprotective activity.
Various Hedychium species are used to treat ailments and diseases in traditional herbal medicine; extracts, decoctions, infusions, macerates, oils and squeezed liquid forms are used for different routes of administration, as listed in .
Hedychium species possess various pharmacodynamic activities, based on the different studies that researchers have carried out. lists the pharmacodynamic activities of the extracts and the chemical constituents present in them.
2.5.1. Anti-Inflammatory and Analgesic Activity
In a personal communication report by Dhawan, B.N. (Pharmacology Division, CDRI, Lucknow), it was reported that the ethanolic extract of the Hedychium spicatum rhizome has anti-inflammatory properties.
In Vitro Anti-Inflammatory and Analgesic Effect
Shrotriya et al. evaluated successive hexane, chloroform, and methanol extracts of the Hedychium coronarium rhizome for analgesic activity.
Acetic Acid-Induced Writhing Test for Analgesic Effect
In this test, the chloroform and methanolic extracts at 400 mg/kg body weight both inhibited the writhing reflex (27.23% inhibition by the chloroform extract and 40.59% by the methanolic extract) in eight groups of pre-screened Swiss albino mice, with significance of p < 0.001, using aminopyrine (50 mg/kg body weight) as the control. Inhibition was measured using the formula: (1) % Inhibition of writhing = (1 − W t /W c ) × 100, where W t and W c are the numbers of writhes in the treated and control groups, respectively (a short worked example follows at the end of this subsection).
Radiant Heat Tail-Flick Method for Analgesic Activity
Tail-flick latency was assessed with an analgesiometer, using morphine (2 mg/kg body weight) as the control. The percentage elongation of the latency showed a significant response to radiant heat, with p < 0.001.
Carrageenan-Induced Rat Hind Paw Edema Animal Model for Anti-Inflammatory Study
This model was used to estimate acute inflammation. Different concentrations of the hexane, chloroform, and methanol extracts were tested against Phenylbutazone (PBZ) at an 80 mg/kg dose, and the percentage inhibition was calculated using the following formula: (2) % Inhibition of paw edema = (1 − V t /V c ) × 100, where V c and V t represent the paw volumes of the control and treated groups, respectively. The study found that the chloroform and methanolic extracts of the Hedychium coronarium rhizome exert significant (p < 0.01) inhibition of paw volume, 27.46% and 32.39%, respectively, at 400 mg/kg body weight; the control, at 80 mg/kg, showed 42.54% inhibition with a significance value of p < 0.001.
Nitric Oxide Inhibitory Effect
In Vitro Inhibitory Assay of NO Produced in LPS- and IFN-γ-Stimulated RAW 264.7 Macrophages
Labdane diterpenes Hedychenoids A, Hedychenoids B, Hedychenone, Forrestin A, and Villosin were isolated from the rhizomes of Hedychium yunnanense by ethanol maceration, and further purified by liquid–liquid extraction using ethyl acetate and water, then butanol. Hedychenoids B and Villosin had an inhibitory effect, with IC 50 values of 6.57 ± 0.88 and 5.99 ± 1.20 µg/mL, respectively.
Inhibition of NO Production and iNOS Induction in LPS-Activated Mouse Peritoneal Macrophages
NO is a free radical produced by the oxidation of L-arginine by NO synthase (NOS). NO is involved in various processes, e.g., vasodilation, nonspecific host defense, ischemia–reperfusion injury, and chronic and acute inflammation, which respond to pro-inflammatory agents such as interleukin-1β, tumour necrosis factor-α, and LPS in macrophages, endothelial cells, and smooth muscle cells. Hedychilactone A, Hedychilactone B, Hedychilactone C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17),13(14)-dien-15,16-olide, Hedychenone, 7-Hydroxyhedychenone, (E)-Nerolidol, Hedychiol A, Hedychiol B 8,9-diacetate, and L-NMMA were tested for inhibitory effects, and concentrations of 10 µM–100 µM caused significant inhibition, with p < 0.05 to p < 0.01. shows the mechanism of NO production.
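To make formulas (1) and (2) above concrete, the short sketch below applies their shared form, (1 − treated/control) × 100, to hypothetical writhing counts and paw volumes; the numbers are illustrative only and are not taken from the cited study.

```python
def percent_inhibition(treated, control):
    """Formulas (1) and (2) above share the same form:
    % inhibition = (1 - treated/control) x 100."""
    return (1.0 - treated / control) * 100.0

# hypothetical writhing counts (formula 1): Wt = treated group, Wc = control group
print(f"writhing inhibition:  {percent_inhibition(36, 60):.2f} %")     # 40.00 %

# hypothetical paw volumes in mL (formula 2): Vt = treated group, Vc = control group
print(f"paw edema inhibition: {percent_inhibition(0.95, 1.40):.2f} %")  # 32.14 %
```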
Inhibition of Acetic Acid-Induced Vascular Permeability in Mice (Anti-Inflammatory)
Histamine and serotonin play an important role in the vascular permeability induced by acetic acid, an exudative state of inflammation. The anti-inflammatory effects of the methanolic extract of Hedychium coronarium and some labdane diterpenes (Coronarin D and Coronarin D methyl ether) were tested; the methanol extract (dose 250–500 mg/kg), Coronarin D (dose 25–50 mg/kg), and Coronarin D methyl ether (dose 25–50 mg/kg) showed significance of p < 0.05 to p < 0.01.
Inhibition of Released Beta-Hexosaminidase from RBL-2H3 Cells In Vitro (Antiallergic)
Twelve compounds from Hedychium coronarium (Hedychiol A, Hedychiol B 8,9-diacetate, Hedychilactones A, B, and C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17),13(14)-dien-15,16-olide, Hedychenone, 7-Hydroxyhedychenone, and (E)-Nerolidol) were examined for antiallergic activity by testing their inhibitory effect on the release of beta-hexosaminidase from RBL-2H3 cells. The compounds showed inhibitory concentrations from 10 µM to 100 µM, significantly different from the control, with p < 0.05 to p < 0.01. Coronarin G, Coronarin H, Coronarin I, Coronarin D, Coronarin D methyl ether, Hedyforrestin C, (E)-Nerolidol, β-sitosterol, Daucosterol, and Stigmasterol isolated from Hedychium coronarium were also evaluated; their inhibitory effect was tested on lipopolysaccharide-stimulated production of pro-inflammatory cytokines in bone marrow-derived dendritic cells. Coronarin G, Coronarin H, and Hedyforrestin C were significant inhibitors of LPS-stimulated TNF-α, IL-6, and IL-12 p40 production, with IC 50 values ranging from 0.19 ± 0.11 to 10.38 ± 2.34 µM, while the other compounds were cytotoxic. shows stimulation of TNF-α. Hedycoronen A, Hedycoronen B, labda-8(17),11,13-trien-16,15-olide, 16-hydroxylabda-8(17),11,13-trien-15,16-olide, Coronarin A, and Coronarin E were isolated from the methanol extract of Hedychium coronarium rhizome, and further purification was carried out through successive liquid–liquid extractions (chloroform–water and ethyl acetate–water), followed by chromatography over silica gel. Hedycoronen A and Hedycoronen B had a potent inhibitory effect on LPS-stimulated interleukin-6 (IL-6) and IL-12 p40, with IC 50 values ranging from 4.1 ± 0.2 to 9.1 ± 0.3 μM, and moderate inhibitory activity on tumor necrosis factor-α (TNF-α) production, with IC 50 values of 46.0 ± 1.3 and 12.7 ± 0.3 μM. Hedychicoronarin, Peroxycoronarin D, 7β-hydroxycalcaratarin A, (E)-7β-hydroxy-6-oxo-labda-8(17),12-diene-15,16-dial, Calcaratarin A, Coronarin A, Coronarin D, Coronarin D methyl ether, Coronarin D ethyl ether, (E)-labda-8(17),12-diene-15,16-dial, ergosta-4,6,8(14),22-tetraen-3-one, a mixture of β-sitostenone and β-stigmasta-4,22-dien-3-one, 6β-hydroxystigmast-4-en-3-one, 6β-hydroxystigmasta-4,22-dien-3-one, and a mixture of stearic acid and palmitic acid were isolated from Hedychium coronarium. The inhibitory effects of the compounds on superoxide radical anion generation and on elastase release by human neutrophils were evaluated in response to fMet-Leu-Phe/cytochalasin B. 7β-Hydroxycalcaratarin A, (E)-7β-hydroxy-6-oxo-labda-8(17),12-diene-15,16-dial, Calcaratarin A, (E)-labda-8(17),12-diene-15,16-dial, and ergosta-4,6,8(14),22-tetraen-3-one showed inhibitory concentrations of IC 50 < 6.17 µg/mL.
2.5.2. Antioxidant Activity
Chan et al. (2008) estimated total phenolic content and free radical-scavenging activity using the Folin–Ciocalteu and DPPH radical-scavenging assays. The methanolic extract of the leaves of Hedychium coronarium (from the Lake Gardens of Kuala Lumpur, Malaysia) was used for the study. The methanolic extract obtained from rhizomes of Hedychium spicatum was also found to have potent antioxidant activity. The total phenolic content was estimated using the gallic acid calibration equation y = 0.0111x + 0.0148 (R 2 = 0.9998), and the total phenolic content of Hedychium coronarium was 820 ± 55 mg GAE/100 g, with significance of p < 0.05. The DPPH radical-scavenging activity was calculated as an IC 50 and expressed as the ascorbic acid equivalent antioxidant capacity (AEAC), with IC 50 = 0.00387 mg/mL; Hedychium coronarium had a capacity of 814 ± 116 mg AA/100 g, with significance of p < 0.05. AEAC (mg AA/100 g) = IC 50(ascorbate) /IC 50(sample) × 10 5 (illustrated in the short sketch below). Essential oil from a Hedychium gardnerianum Sheppard ex Ker-Gawl leaf was evaluated for DPPH antioxidant activity, and good antioxidant activity was found relative to ascorbic acid and BHT as standards.
2.5.3. Anti-Microbial Activity and Anti-Fungal Activity
Anti-Microbial Activity and Anti-Fungal Activity by Disk Diffusion Method
Aqueous, methanol, ethanol, acetone, and hexane extracts of Hedychium spicatum rhizome were evaluated against B. subtilis, S. aureus, M. luteus, E. coli, A. flavus, A. fumigatus, M. gypseum, and C. albicans, and the zone of inhibition was recorded (in mm) at different doses (mg/disc). The extracts were active against B. subtilis, M. luteus, E. coli, A. flavus, and C. albicans, and not active against S. aureus, A. fumigatus, and M. gypseum. The essential oil of Hedychium coronarium was obtained by hydro-distillation, and its composition was identified by gas chromatography combined with mass spectroscopy with a flame ionization detector; the major components were β-pinene, eucalyptol, linalool, Coronarin E, etc. The essential oil exhibited DPPH radical-scavenging activity and also inhibited C. albicans and F. oxysporum. Bisht, G.S. et al. (2006) carried out an anti-microbial study of petroleum ether, benzene, chloroform, ethyl acetate, acetone, ethanol, and aqueous extracts and the essential oil from Hedychium spicatum rhizome. Dimethyl sulfoxide (DMSO) was used as the diluent for the extracts, and Tween-20 as the diluent for the essential oil. The following bacterial strains: Bacillus cereus G(+), Staphylococcus aureus (KI-1A) G(+), Staphylococcus aureus G(+), Alcaligenes faecalis G(−), Escherichia coli G(−), Escherichia coli (MTCC 1687) G(−), Klebsiella pneumoneae G(−), Pseudomonas aeruginosa (MTCC 424) G(−), Salmonella typhi G(−), and Shigella dysenterae G(−), and fungal strains, i.e., Alternaria solani, Aspergillus fumigatus, Aspergillus flavus, Aspergillus niger, Candida albicans (MTCC 227), Fusarium oxysporum, Mucor racemosus, Penicillium monotricales, Penicillium spp., Rhizopus stolonifer, Trichoderma viride, and Trichoderma lignorum, were used. The concentrations tested were 20 mg/disc for the extracts and 500 µL/disc for the essential oil.
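The sketch below illustrates, with hypothetical readings, how the gallic acid calibration line and the AEAC relation quoted in the antioxidant subsection above are applied; the absorbance and the sample IC 50 used here are assumed values chosen only to show the arithmetic.

```python
# Gallic acid calibration line from the Folin-Ciocalteu assay quoted above:
# absorbance y = 0.0111 * x + 0.0148 (R^2 = 0.9998), x in gallic acid equivalents.
SLOPE, INTERCEPT = 0.0111, 0.0148

def gallic_acid_equivalents(absorbance):
    """Invert the calibration line: convert an absorbance reading into
    gallic acid equivalents (same units as the calibration standards)."""
    return (absorbance - INTERCEPT) / SLOPE

def aeac_mg_aa_per_100g(ic50_ascorbate_mg_ml, ic50_sample_mg_ml):
    """AEAC (mg AA/100 g) = IC50(ascorbate) / IC50(sample) * 1e5, as quoted above."""
    return ic50_ascorbate_mg_ml / ic50_sample_mg_ml * 1e5

print(gallic_acid_equivalents(0.45))          # hypothetical absorbance reading
print(aeac_mg_aa_per_100g(0.00387, 0.475))    # assumed sample IC50 of 0.475 mg/mL
```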
Gentamycin (10 µg/disc), Penicillin (10 units/disc), Vancomycin (30 µg/disc), and Methicillin (5 µg/disc) were used as standard anti-microbial agents, and Cycloheximide (30 µg/disc) was used as the fungicide, using a Petri dish diffusion/agar diffusion test. S. Joshi et al. carried out an antimicrobial assay using a Petri dish diffusion method, testing the rhizome essential oils (50 µL/disc) of Hedychium ellipticum, Hedychium aurantiacum, Hedychium coronarium, and Hedychium spicatum, with Amikacin, Ciprofloxacin, Ampicillin, Gentamycin, and Tetracycline as controls, on the bacterial strains S. aureus, Salmonella enterica, Pasteurella multocida, Shigella flexneri and Escherichia coli. The minimum inhibitory concentration of the essential oils ranged from 0.97 to 62.5 µL/mL, depending on the susceptibility of the tested organism.
In Vitro Antimicrobial Activity Using Agar Diffusion/Disk Diffusion Method
A Mueller–Hinton agar plate was cultured with microbial broth culture to study the zone of inhibition of Hedychium coronarium rhizome essential oil (containing monoterpenes, diterpenes, and sesquiterpenes) by the cylinder plate method. The plates were incubated for 24 h at 37 °C for bacteria and 24–48 h at 28 °C for fungi. The bacteria Bacillus subtilis and Pseudomonas aeruginosa and the yeast-like fungi Candida albicans and Trichoderma sp. (dermatophyte) were studied. The essential oil of the rhizomes had a better inhibitory effect against Trichoderma sp. and Candida albicans than against Bacillus subtilis and Pseudomonas aeruginosa.
2.5.4. Anthelmintic Property
Aqueous, hydro-ethanolic, hydro-methanolic, and methanol extracts were evaluated for activity; the methanolic extract of Hedychium spicatum was as effective as Thiabendazole at 2%, 4%, and 6% concentrations.
Caenorhabditis elegans Mobility Test for Anthelmintic Activity
Lima, A. et al. (2021) carried out an anthelmintic activity study using adult C. elegans, both susceptible (wild-type, Bristol N2) and Ivermectin-resistant strains; a balanced saline solution was used in 24-well plates for treatment of the nematodes. Hedychium coronarium rhizome essential oil and standard monoterpenes, i.e., α-Pinene, β-Pinene, (S)-(−)-Limonene, (R)-Limonene, 1,8-Cineole, and p-Cymene, were tested at concentrations ranging from 0.009 to 10 mg/mL, diluted in 1% DMSO. After 24 h at 24 °C, mortality was evaluated, with an M-9 solution mixture as the negative control and Ivermectin as the positive control. The essential oil of Hedychium coronarium rhizomes had an IC 50 of 0.082 mg/mL (IC 95 0.058–0.117 mg/mL) for the Bristol N2 strain, and an IC 50 of 0.82 mg/mL (IC 95 0.556–1.2 mg/mL) for the Ivermectin-resistant strain, with a significance value of p < 0.05.
2.5.5. Anti-Histaminic, Mast Cell-Stabilizing and Bronchodilator Effect
An in vitro study of a root composition containing hydroalcoholic Hedychium spicatum root extract (50 mg/kg) was carried out to examine its antihistaminic effect on guinea pigs, with Histamine dihydrochloride as the control; it effectively works as a preventive-type antagonist. When mast cell stabilization was investigated in rats (with Ketotifen fumarate as the control), a 1000 µg/mL concentration of the composition produced 54–58% inhibition of mast cell degranulation, with a significance value of p < 0.001.
The bronchodilatory effect of the extract composition on histamine-induced bronchospasm (80–86%) was investigated in guinea pigs; a 200 mg/kg–500 mg/kg composition increased pre-convulsion time by 27–36%, with a significance value of p < 0.001.
The antiproliferative activity of Coronarin D against Glioblastoma cell line U–251 was reported . 2.5.7. Ameliorating Potential The protective effect of Hedychium spicatum rhizome powder with concentrations of 4000 ppm and 2000 ppm was tested against 250 ppm IC 50 of Indoxacarb-induced toxicity on a group of cockerels. The ameliorative effect of Hedychium spicatum root powder and its ability to restore the gene activities and expression of antioxidant, biotransformation, and immune system genes were demonstrated in cockerels fed Indoxacarb . 2.5.8. Hepatoprotective Effect Hepatoprotective Effect on D-GalN-Induced Cytotoxicity in Primary Cultured Mouse Hepatocytes S. Nakamura et al. carried out a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) colorimetric assay in primary cultured mouse hepatocytes. Hepatocytes were isolated by the collagen perfusion method. Some 80% Aqueous extract, Coronarins B, Coronarins C, Coronarins D, 15-Hydroxylabda-8(17), 11,13-trien-16,15-olide, 16-Formyllabda-8(17),12-dien-15,11-olide, Ferullic acid, and Silybin were tested. During the test, Formazone was produced. The optical density of Formazone solution at 562 nm (reference: 660 nm) was measured by microplate reader. The percentage inhibition was calculated by the following formula: [OD sample−OD control/OD normal−OD control] × 100. The study concluded that 80% Aqueous acetone extract of Hedychium coronarium flower and other chemical constituents had a hepatoprotective effect, with a significant p < 0.01 value of percentage inhibition . The expression of hepatic genes associated with biotransformation, antioxidant, and immune systems in WLH cockerels fed indoxacarb was evaluated, and a protective effect of Hedychium spicatum root extract was found. The extract prevents changes in expression of antioxidant, biotransformation, and immune system genes . shows inhibition of D-GalN-induced cytotoxicity. Compounds showing hepatoprotective effect through inhibition of D-GalN-induced cytotoxicity: Coronarins B, Coronarins C, Coronarins D, 15-Hydroxylabda-8(17), 11,13-trien-16,15-olide, 16-Formyllabda-8(17),12-dien-15,11-olide, and Ferullic acid. Compounds showing anti-inflammatory effect through inhibition of lPs-induced NO production: Hedychenoids A, Hedychenoids B, Hedychenone, Forrestin A, Villosin, Hedychilactone A, Hedychilactone B, Hedychilctone C, Coronarin D, Coronarin D methyl ether, Coronarine E, Labda-8(17), 13(14)-dien-15,16-olide, Hedychenone, 7-Hydroxihedychenone, -Nerolidol, Hedychiol A, and Hedychiol B 8,9-diacetate. Compounds showing anti-allergic effect through inhibition of TNF-α-induced cytotoxicity: Coronarin G, Coronarin H, Coronarin I, Coronarin D, Coronarin D methyl ether, Hedyforrestin C, (E)-Nerolidol, β-Sitosterol, Daucosterol, and Stigmasterol. 2.5.9. In Vitro Pediculicidal Activity V. Jadhav et al. carried out an in vitro pediculicidal activity. The hydro distillate essential oil of Hedychium spicatum rhizomes has been tested on P.humanuscapitis (Phthiraptera: Pediculidae). Essential oil of 1, 2 and 5% concentration was blended with coconut oil as a base. A 1% permethrin-based preparation was used as a positive control. Lice clasping hair strands were immersed completely in the test solutions and the marketed preparation for 1 min. Vital signs were measured after 5, 10, 15, 20, 30, 45, 60, 90, and 120-min; lice were judged to be dead if a vital sign was zero. 
Mortality was observed as 85%, 80%, and 75% for the 5%, 2%, and 1% Hedychium spicatum essential oil; after 2 h, 100% mortality was observed, which is as significant as 1% permethrin preparation . 2.5.10. Hair-Growth Promotion Activity Pentadecane and Ethyl para methoxy cinnamate were isolated from Hedychium spicatum rhizome hexane extract and evaluated for the in vivo hair growth promotion activity on female Wistar rats weighing 120–150 g. The results found that pentadecane demonstrates good reduction in hair growth time, but hexane extract shows better-than-individual compound activity . 2.5.11. CNS Depressant Activity of Hedychium Spectrum Extract on Rats Ethanolic, hexane, and chloroform extracts were evaluated at a dose of 100 mg/kg body weight for CNS activity, and it was found that the extracts have CNS depressant activity using Gabapentin and Caffeine 250 mg/kg body weight as control . In a personal communication report by Dhawan, B.N, (Pharmacology division, CDRI, Lucknow) it was reported that the ethanolic extract of the Hedychium spicatum plant rhizome has anti-inflammatory properties . In Vitro Anti-Inflammatory and Analgesic Effect Shrotriya et al. evaluated Hedychium coronarium and successive rhizome extracts of Hexane, Chloroform, and Methanol for analgesic activity. Acetic Acid-Induced Writhing Test for Analgesic Effect This test found that chloroform and methanolic extract of 400 mg/kg bw both inhibited writhing reflux (27.23% by chloroform and 40.59% by methanolic extract) in eight groups of pre-screened Swiss albino mice with significant p < 0.001 and 50 mg/body weight of aminopyrine as control. Inhibition was measured using formula : (1) % Inhibition of writhing = 1 − W t W c × 100 Radiant Heat Tail-Flick Method for Analgesic Activity Tail-flick latency was assessed using morphine 2 mg/kg body weight as control and an analgesiometer. The percentage of elongation measures showed significant response for radiant heat, with p < 0.001 . Carrageenan-Induced Rat Hind Paw Edema Animal Model for Anti-Inflammatory Study This model was used to estimate acute inflammation. Different concentrations of hexane, chloroform, methanol extract were tested against PBZ (Phenylbutazone), with an 80 mg/kg dose and percentage inhibition calculated using the following formula: (2) % Inhibition of paw edema = 1 − V t V c × 100 where V c and V t represent paw volume. The control study found that chloroform and the methanolic extract of Hedychium coronarium rhizome extract exert significant p < 0.01 inhibition of paw volume, at 27.46% and 32.39%, respectively, with 400 mg/kg body weight; meanwhile, the control, with 80 mg/kg, showed a 42.54% inhibition with a significance value of p < 0.001 . Nitric Oxide Inhibitory Effect In Vitro Inhibitory Assay Oof NO Produced in LPS and IFN-g-Stimulated 264.7 Macrophages Labdane diterpenes, Hedychenoids A, Hedychenoids B, Hedychenone, Forrestin A, and Villosin were isolated from the rhizomes of Hedychium yunnanense by ethanol maceration, and further purified by liquid–liquid extraction using ethyl acetate and water, then butanol. Hedychenoids B and Villosin had an inhibitory effect, with IC 50 values of 6.57 ± 0.88 and 5.99 ± 01.20 µg/mL respectively . Inhibition of NO Production and iNOS Induction in LPS-Activated Mouse Peritoneal Macrophages NO is a free radical produced by oxidation of L-arginine by NO synthase (NOS). 
NO is involved in various processes, e.g., vasodilation, nonspecific host definition, ischemic reperfusion injury, and chronic and acute inflammation which respond to pro-inflammatory agents such as interleukin-1 β, tumour necrosis factor-α, and LPS in macrophages, endothelial cells, and smooth muscle cells. Hedychilactone A, Hedychilactone B, Hedychilctone C, Coronarin D, Coronarin D methyl ether, Coronarine E, Labda-8(17), 13(14)-dien-15,16-olide, Hedychenone, 7-Hydroxihedychenone, -Nerolidol, Hedychiol A, Hedychiol B 8,9-diacetate, and LNMMA were tested for inhibitory effects, and the study found that a concentration of 10 µM–100 µM caused significant inhibition, with p < 0.05 to p < 0.01 . shows the mechanism of NO production. Inhibition of Acetic Acid-Induced Vascular Permeability in Mice (Anti-Inflammatory) Histamine and serotonin play an important role in the vascular permeability induced by acetic acid, an exudative state of inflammation. The anti-inflammatory effects of methanolic extract of Hedychium coronarium and some labdane diterpenes (Coronarin D, Coronarin D methyl ether) were tested, and the study found that methanol extract (Dose 250–500 mg/kg), Coronarin D (Dose 25–50 mg/kg), and Coronarin D methyl ether (Dose 25–50 mg/kg) had a significance of p < 0.05– p < 0.01 . Inhibition of Released Beta-Hexosaminidase from RBL-2H3 Cells In Vitro (Antiallergic) The 12 compounds of Hedychiol A, Hedychiol B 8,9-diacetate, Hedychilactone A, B, C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17), 13(14)-dien-15,16-olide, Hedychenone, 7-hydroxyhedychenone, and -Nerolidol from Hedychium coronarium were examined for an antiallergic reaction by testing their inhibitory effect on the release of beta-Hexosaminidase from RBL-2h3 cells. The study found that the compounds have inhibitory concentrations from 10 µM to 100 µM, which were significantly different from the control, with p < 0.05 to p < 0.01 . Coronarin G, Coronarin H, Coronarin I, Coronarin D, Coronarin D methyl ether, Hedyforrestin C, (E)-nerolidol, b-sitosterol, daucosterol, and stigmasterol isolated from Hedychium coronarium were evaluated. The inhibitory effect of the compounds was tested on Lipopolysaccharide-stimulated production of pro-inflammatory cytokines in bone marrow-derived dendritic cells. The compounds Coronarin G, Coronarin H, and Hedyforrestin C were significant inhibitors of LPS-stimulated TNF-α, IL-6, and IL-12 p40 production, with IC 50 ranging from 0.19 ± 0.11 to 10.38 ± 2.34 µM, and the other compounds are cytotoxic . shows stimulation of TNF-α. Hedycoronen A, Hedycoronen B, labda-8(17),11,13-trien-16,15-olide, 16-hydroxyl-abda-8(17),11,13-trien-15,16-olide, Coronarin A, and Coronarin E were isolated from Hedychium coronarium rhizome extract with methanol, and further purification was carried out through successive liquid–liquid extractions with water, chloroform, and water and ethyl acetate, followed by chromatography over silica gel. Hedycoronen A and Hedycoronen B were found to have a potent inhibitory effect on LPS-stimulated interleukin-6 (IL-6) and IL-12 p40, with IC 50 ranging from 4.1 ± 0.2 to 9.1 ± 0.3 μM. Hedycoronen A and Hedycoronen B were found to have moderate inhibitory activity on tumor necrosis factor-α (TNF-α) production, with IC 50 values of 46.0 ± 1.3 and 12.7 ± 0.3 μM . 
Hedychicoronarin, Peroxycoronarin D, 7β-hydroxycalcaratarin A, (E)-7β-hydroxy-6-oxo-labda-8(17),12-diene-15,16-dial, Calcaratarin A, Coronarin A, Coronarin D, Coronarin D methyl ether, Coronarin D ethyl ether, (E)-labda-8(17),12-diene-15,16-dial, ergosta-4,6,8(14),22-tetraen-3-one, a mixture of β-sitostenone and β-stigmasta-4,22-dien-3-one, 6β-hydroxystigmast-4-en-3-one, 6β-hydroxystigmasta-4,22-dien-3-one, and a mixture of stearic acid and palmitic acid were isolated from Hedychium coronarium. The inhibitory effect of the compounds was tested on superoxide radical anion generation and on elastase release by human neutrophils in response to fMet-Leu-Phe/cytochalasin B. The compounds 7β-hydroxycalcaratarin A, (E)-7β-hydroxy-6-oxo-labda-8(17),12-diene-15,16-dial, Calcaratarin A, (E)-labda-8(17),12-diene-15,16-dial, and ergosta-4,6,8(14),22-tetraen-3-one had inhibitory concentrations of IC50 < 6.17 µg/mL.
Chan et al., 2008 carried out an estimation of total phenolic content and free radical-scavenging activity using the Folin–Ciocalteu and DPPH radical-scavenging assays. The methanolic extract of leaves of Hedychium coronarium (from the Lake Gardens of Kuala Lumpur, Malaysia) was used for the study; the methanolic extract obtained from rhizomes of Hedychium spicatum was also found to have potent antioxidant activity. The total phenolic content was estimated using the gallic acid calibration equation y = 0.0111x + 0.0148 (R² = 0.9998), and the total phenolic content of Hedychium coronarium was 820 ± 55 mg GAE/100 g, with significance of p < 0.05. The DPPH radical-scavenging activity was calculated as an IC50 and expressed as the ascorbic acid equivalent antioxidant capacity (AEAC), with IC50(ascorbate) = 0.00387 mg/mL; Hedychium coronarium had a capacity of 814 ± 116 mg AA/100 g, with significance of p < 0.05, where AEAC (mg AA/100 g) = [IC50(ascorbate)/IC50(sample)] × 10^5. Essential oil from the leaf of Hedychium gardnerianum Sheppard ex Ker-Gawl was evaluated for DPPH antioxidant activity, and good antioxidant activity was found against ascorbic acid and BHT as standards.
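As an illustration of the AEAC formula above, consider an extract with a DPPH IC50 of about 0.475 mg/mL; this sample IC50 is an assumed value back-calculated for the example, not a figure quoted in the cited study:
\[
\text{AEAC} = \frac{IC_{50(\text{ascorbate})}}{IC_{50(\text{sample})}} \times 10^{5} = \frac{0.00387\ \text{mg/mL}}{0.475\ \text{mg/mL}} \times 10^{5} \approx 815\ \text{mg AA}/100\ \text{g}
\]
which is of the same order as the 814 ± 116 mg AA/100 g reported for Hedychium coronarium.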
Anti-Microbial Activity and Anti-Fungal Activity by Disk Diffusion Method
Aqueous, methanol, ethanol, acetone, and hexane extracts of Hedychium spicatum rhizome were evaluated against B. subtilis, S. aureus, M. luteus, E. coli, A. flavus, A. fumigatus, M. gypseum, and C. albicans, and the zone of inhibition was recorded (in mm) at different doses (mg/disc). The results found that the extracts were active against B. subtilis, M. luteus, E. coli, A. flavus, and C. albicans, and not active against S. aureus, A. fumigatus, and M. gypseum. The essential oil of Hedychium coronarium was obtained through hydrodistillation, and its composition was identified by gas chromatography combined with mass spectrometry and a flame ionization detector; the major components were β-pinene, eucalyptol, linalool, Coronarin E, etc. The essential oil exhibited DPPH radical-scavenging activity and also inhibited C. albicans and F. oxysporum.

Bisht, G.S. et al., 2006 carried out an anti-microbial study of petroleum ether, benzene, chloroform, ethyl acetate, acetone, ethanol, and aqueous extracts and the essential oil from Hedychium spicatum rhizome. Dimethyl sulfoxide (DMSO) was used as the diluent for the extracts, and Tween-20 was used as the diluent for the essential oil. The following bacterial strains: Bacillus cereus G(+), Staphylococcus aureus (KI-1A) G(+), Staphylococcus aureus G(+), Alcaligenes faecalis G(−), Escherichia coli G(−), Escherichia coli (MTCC 1687) G(−), Klebsiella pneumoniae G(−), Pseudomonas aeruginosa (MTCC 424) G(−), Salmonella typhi G(−), and Shigella dysenteriae G(−), and fungal strains, i.e., Alternaria solani, Aspergillus fumigatus, Aspergillus flavus, Aspergillus niger, Candida albicans (MTCC 227), Fusarium oxysporum, Mucor racemosus, Penicillium monotricales, Penicillium spp., Rhizopus stolonifer, Trichoderma viride, and Trichoderma lignorum, were used. The extracts and the essential oil were tested at 20 mg/disc and 500 µL/disc, respectively. Gentamycin (10 µg/disc), Penicillin (10 units/disc), Vancomycin (30 µg/disc), and Methicillin (5 µg/disc) were used as standard anti-microbial agents, and Cycloheximide (30 µg/disc) was used as the standard fungicide, using a Petri dish diffusion/agar diffusion test.

S. Joshi et al. carried out an antimicrobial assay using a Petri dish diffusion method, testing the rhizome essential oils (50 µL/disc) of Hedychium ellipticum, Hedychium aurantiacum, Hedychium coronarium, and Hedychium spicatum, with Amikacin, Ciprofloxacin, Ampicillin, Gentamycin, and Tetracycline as controls, on the bacterial strains S. aureus, Salmonella enterica, Pasteurella multocida, Shigella flexneri, and Escherichia coli. The minimum inhibitory concentration of the essential oils was found to range from 0.97 to 62.5 µL/mL, depending on the susceptibility of the tested organism.

In Vitro Antimicrobial Activity Using Agar Diffusion/Disk Diffusion Method
A Mueller–Hinton agar plate was cultured with microbial broth culture to study the zone of inhibition, using Hedychium coronarium rhizome essential oil (containing monoterpenes, diterpenes, and sesquiterpenes), by the cylinder plate method. The plates were incubated for 24 h at 37 °C for bacteria and 24–48 h at 28 °C for fungi. The bacteria Bacillus subtilis and Pseudomonas aeruginosa, the yeast-like fungus Candida albicans, and Trichoderma sp. (dermatophyte) were studied. The study found that the essential oil of the rhizomes has a better inhibitory effect against Trichoderma sp. and Candida albicans than against Bacillus subtilis and Pseudomonas aeruginosa.

Aqueous, hydro-ethanolic, hydro-methanolic, and methanol extracts were evaluated for activity; the study found that the methanolic extract of Hedychium spicatum was as effective as Thiabendazole at 2%, 4%, and 6% concentrations.

Caenorhabditis elegans Mobility Test for Anthelmintic Activity
Lima A. et al., 2021 carried out an anthelmintic activity study using adult C. elegans, both susceptible (wild-type, Bristol N2) and Ivermectin-resistant; a balanced saline solution was used in 24-well plates for treatments with the nematodes. The concentrations of Hedychium coronarium rhizome essential oil and of the standard monoterpenes, i.e., α-Pinene, β-Pinene, (S)-(−)-Limonene, (R)-Limonene, 1,8-Cineole, and p-Cymene, ranged from 0.009 to 10 mg/mL, diluted in 1% DMSO. After 24 h at 24 °C, mortality was evaluated, with a mixture of M-9 solution as negative control and Ivermectin as positive control. The study found that the essential oil of Hedychium coronarium rhizomes had an IC50 of 0.082 mg/mL (IC95 0.058–0.117 mg/mL) for the Bristol N2 strain, and an IC50 of 0.82 mg/mL (IC95 0.556–1.2 mg/mL) for the Ivermectin-resistant strain, with a significance value of p < 0.05.
An in vitro study of a hydroalcoholic root extract composition containing Hedychium spicatum root was carried out to examine its antihistaminic effect (with Histamine dihydrochloride as control) in guinea pigs, at a composition dose of 50 mg/kg of the hydroalcoholic extract; it effectively works as a preventive-type antagonist. When investigating mast cell stabilization (Ketotifen fumarate as control) in rats, a 1000 µg/mL concentration of the composition produced 54–58% inhibition of mast cell degranulation, with a significance value of p < 0.001. The bronchodilatory effect of the extract composition on histamine-induced bronchospasm (80–86%) was investigated in guinea pigs; a 200 mg/kg–500 mg/kg composition increased pre-convulsion time by 27–36%, with a significance value of p < 0.001.

Sesquiterpenes isolated from Hedychium spicatum (Eudesma-4(15)-ene-β,11-diol, Cryptomeridiol, β-Eudesmol, 3-Hydroxy-β-eudesmol, Mucrolidin, Oplopanone, α-Terpineol, Elemol, Dehydrocarissone, Δ7-β-Eudesmol, Opladiol, Hydroxycryptomeridiol, β-Caryophyllene oxide, Coniferaldehyde, and Ethyl ferulate) were examined for their inhibitory effects against A-549, B-16, HeLa, HT-29, NCI-H460, PC-3, IEC-6, and L-6 cell lines. The results found that the compounds had potent cytotoxic activity, with IC50 values between 0.3 μg/mL and 1.80 μg/mL.

Cytotoxic Screening Test
A chloroform extract of the rhizome of Hedychium coronarium was eluted over a silica gel column, and seven fractions were evaluated for cytotoxicity, which was tested by a total packed cell volume method using Sarcoma 180 ascites in mice. Coronarin A (IC50 = 1.65), (E)-labda-8(17),12-diene-15,16-dial (IC50 = 18.5), Coronarin B (IC50 = 2.70), Coronarin C (IC50 = 17.5), and Coronarin D (IC50 = 17.0) were also tested by an inhibition of colony formation method using Chinese hamster V-79 cells; cytotoxicity was determined as T/C × 100 (where T is the number of stained colonies in the test group and C the number in the control group) or as the IC50, the drug concentration that inhibits colony growth by 50%.
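As a simple numerical illustration of the T/C endpoint described above, with colony counts that are assumed values and not data from the cited study: if a treated culture yields 42 stained colonies and the untreated control yields 120, then
\[
T/C \times 100 = \frac{42}{120} \times 100 = 35\%
\]
and the IC50 is the concentration at which this ratio falls to 50%.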
In Vitro Cytotoxicity Assay
Labdane diterpenes Hedychenoids A and B, Hedychenone, Forrestin A, and Villosin were isolated from the rhizomes of Hedychium yunnanense by ethanol maceration, and further purified by liquid–liquid extraction using ethyl acetate and water, then butanol. The compounds were tested using the SRB method on SGC-7901 (a human gastric cancer cell line) and HeLa (human cervical carcinoma) cells. The study found that Hedychenoid B, Hedychenone, and Villosin had cytotoxicity against SGC-7901, with IC50 values of 14.88 ± 0.52, 7.08 ± 0.21, and 7.76 ± 0.21 µg/mL, and against HeLa, with IC50 values of 9.76 ± 0.48 and 13.24 ± 0.63 µg/mL, respectively.

In Vitro Cytotoxic Study of Hedychenone and Its Analogues
MCF-7 (breast cancer), HL-60 (human promyelocytic leukemia), CHO (Chinese hamster ovary), A-375 (human malignant melanoma), and A-549 (human lung carcinoma) cell lines were studied.

Hedyforrestin D, 15-Ethoxy-hedyforrestin D, Yunnacoronarin A, Yunnacoronarin B, and Yunnacoronarin C were tested for cytotoxicity against the lung adenocarcinoma cell line A549 and the leukemia cell line K562 through an MTT assay. The study found that Yunnacoronarin A and Yunnacoronarin B have good activity, with IC50 values of 0.92 and 2.2 µM. The unsaturated lactone group had an important role in the anti-tumor activity against human lung adenocarcinoma.

Compounds derived from the hexane extract of Hedychium coronarium (6-oxo-7,11,13-labdatrien-17-al-16,15-olide, 7,17-dihydroxy-6-oxo-7,11,13-labdatrien-16,15-olide, Coronarin D, Coronarin C, Coronarin D methyl ether, Cryptomeridiol, Hedychenone, 6-oxo-7,11,13-labdatriene-16,15-olide, Pacovatinin A, 4-Hydroxy-3-methoxycinnamaldehyde, and 4-Hydroxy-3-methoxy ethyl cinnamate) were tested against A-549 (lung cancer), SK-N-SH (human neuroblastoma), MCF-7 (breast cancer), and HeLa (cervical cancer) cell lines, showing moderate cytotoxic activity, as well as antineoplastic activity against brain cancer. The antiproliferative activity of Coronarin D against the glioblastoma cell line U-251 was reported.
The protective effect of Hedychium spicatum rhizome powder at concentrations of 4000 ppm and 2000 ppm was tested against toxicity induced by Indoxacarb (250 ppm, IC50) in a group of cockerels. The ameliorative effect of Hedychium spicatum root powder and its ability to restore the activities and expression of antioxidant, biotransformation, and immune system genes were demonstrated in cockerels fed Indoxacarb.

Hepatoprotective Effect on D-GalN-Induced Cytotoxicity in Primary Cultured Mouse Hepatocytes
S. Nakamura et al. carried out a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) colorimetric assay in primary cultured mouse hepatocytes. Hepatocytes were isolated by the collagen perfusion method. An 80% aqueous acetone extract, Coronarin B, Coronarin C, Coronarin D, 15-Hydroxylabda-8(17),11,13-trien-16,15-olide, 16-Formyllabda-8(17),12-dien-15,11-olide, Ferulic acid, and Silybin were tested. During the test, formazan was produced, and the optical density (OD) of the formazan solution at 562 nm (reference: 660 nm) was measured with a microplate reader. The percentage inhibition was calculated by the following formula: [(OD_sample − OD_control)/(OD_normal − OD_control)] × 100. The study concluded that the 80% aqueous acetone extract of Hedychium coronarium flower and the other chemical constituents had a hepatoprotective effect, with a significant (p < 0.01) percentage inhibition.
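To illustrate the percentage-inhibition formula above, the optical densities below are assumed values for the example, not measurements from the cited study. With OD_normal = 0.80 for untreated hepatocytes, OD_control = 0.30 for D-GalN-treated cells, and OD_sample = 0.55 for cells treated with D-GalN plus a test compound,
\[
\%\,\text{Inhibition} = \frac{OD_{\text{sample}} - OD_{\text{control}}}{OD_{\text{normal}} - OD_{\text{control}}} \times 100 = \frac{0.55 - 0.30}{0.80 - 0.30} \times 100 = 50\%
\]
i.e., the compound restores half of the viability lost to D-GalN.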
The expression of hepatic genes associated with biotransformation, antioxidant, and immune systems in WLH cockerels fed Indoxacarb was evaluated, and a protective effect of Hedychium spicatum root extract was found; the extract prevents changes in the expression of antioxidant, biotransformation, and immune system genes. The accompanying figure shows inhibition of D-GalN-induced cytotoxicity. Compounds showing a hepatoprotective effect through inhibition of D-GalN-induced cytotoxicity: Coronarin B, Coronarin C, Coronarin D, 15-Hydroxylabda-8(17),11,13-trien-16,15-olide, 16-Formyllabda-8(17),12-dien-15,11-olide, and Ferulic acid. Compounds showing an anti-inflammatory effect through inhibition of LPS-induced NO production: Hedychenoids A and B, Hedychenone, Forrestin A, Villosin, Hedychilactone A, Hedychilactone B, Hedychilactone C, Coronarin D, Coronarin D methyl ether, Coronarin E, Labda-8(17),13(14)-dien-15,16-olide, 7-Hydroxyhedychenone, (E)-Nerolidol, Hedychiol A, and Hedychiol B 8,9-diacetate. Compounds showing an anti-allergic effect through inhibition of TNF-α-induced cytotoxicity: Coronarin G, Coronarin H, Coronarin I, Coronarin D, Coronarin D methyl ether, Hedyforrestin C, (E)-Nerolidol, β-Sitosterol, Daucosterol, and Stigmasterol.

2.6.1. Antimicrobial Composition
A hydroalcoholic distillate of Hedychium spicatum is used with other plants' distillates as an antimicrobial composition in Japan.

2.6.2. Skin Protective Composition
A composition of Hedychium extract has been reported to treat environmental damage to the skin, regulating firmness, tone, wrinkles, and skin texture with a cosmetically acceptable carrier.
Inhibition of UV-Induced Matrix Metalloproteinase (MMP)
Epidermal equivalents derived from human epidermal keratinocytes, topically treated with 0% or 0.5% w/w Hedychium spicatum, were irradiated with solar spectrum light and analysed by ELISA, which showed that compositions containing Hedychium spicatum extract provide protection against UV-induced matrix metalloproteinase-1 (MMP-1): after 15 MED of UV light, MMP-1 of 19.3 pg/mL was reduced to 11.2 pg/mL with the 0% Hedychium extract composition, and MMP-1 of 31.2 pg/mL was reduced to 2.1 pg/mL with the 0.5% Hedychium extract composition.

Prevention of Smoke-Induced Loss of Thiols in Normal Human Dermal Fibroblasts
Glutathione works as a redox buffer by maintaining a balance of oxidants and antioxidants. UV exposure depletes antioxidants and glutathione, which leads to higher UVR sensitivity, causing wrinkling of the skin and environmental damage. Ten minutes of exposure to smoke reduces the thiol percentage; a 100 µg/mL concentration of Hedychium spicatum extract afforded thiol protection of 106 ± 15.8 (mean ± SD) in the smoke-exposed groups.

Inhibition of Nitric Oxide Production
The ability of Hedychium extract to inhibit nitric oxide production was tested in LPS-stimulated murine macrophages. Nitric oxide is involved in physiological processes such as vasodilation, neurotransmission, inflammation, and the growth of cancers. Nitric oxide combined with a superoxide radical produces peroxynitrite, a highly toxic free radical. Murine macrophage RAW 264.7 cells were treated with Hedychium extract at concentrations of 10 to 200 µg/mL together with lipopolysaccharide from E. coli, giving an IC50 of 69.97 µg/mL.

2.6.3. Compositions Used to Darken the Skin
Compositions of Hedychium extract, some peptides, and other extracts were tested for their ability to darken the skin and have been studied in vitro and in vivo on human skin.

Induced Pigmentation in Human Cell Culture
A keratinocyte–melanocyte culture (human HaCaT keratinocytes) was used to test different peptides (at 50 µM), pigment melanin and derivatives of melanin (0.0001% w/v to 1% w/v), and plant extract; Forskolin was used as a pigmentation inducer. L-3,4-Dihydroxyphenylalanine (DOPA) staining and computerized image analysis were carried out (the parameters measured were the surface area of stained material within the melanocytes and keratinocytes, the total surface area of cells in culture, and the related pigmented area). Cell viability was assayed using Alamar Blue TM, and the increased pigmentation level of a peptide + Hedychium extract (50 µM, 0.1% w/v) composition was tested during the analysis; the mean related pigmented area was 0.050, and the control mean area was 0.150.

Induced Pigmentation In Vivo
Dark-skinned Yucatan micro swine were used for analysis of the increase in pigment deposition induced by Forskolin or Coleus extract versus a positive control (1% w/v); pigment was used at 1% to 5% w/v, and peptide at 250 µM to 500 µM. Hedychium spicatum extract dissolved in ethanol:propylene glycol at a ratio of 70:30 v/v indicated a strong increase in pigment deposition, and some increase in caps.

Darkening of Human Skin
Human skin obtained from patients undergoing cosmetic surgery was tested. Human grafts were treated with a composition of peptide (500 µM) and soluble melanin Melasyn-100 tm (1% w/v) in lysosome (20 mg/mL) as a control; histological sections were evaluated for changes in pigment deposition and capped epidermal cells above the basal layer.
The effect of an ethanol extract of the leaves and pseudostem of Hedychium coronarium on melanogenesis in B16 cells was evaluated (through melanin titration); stimulation of melanin release was inhibited by the extract at a 1 mg/mL concentration. This indicates that it may help to inhibit sun-induced pigmentary spots.

2.6.4. Synergistic Antipyretic Formulation
A formulation containing Berberis aristata (15%), Tinospora cordifolia (15%), Alstonia scholaris (10%), Andrographis paniculata (10%), Hedychium spicatum (15%), preservative/sodium benzoate (0.001%), and simple syrup q.s. (to make the volume up to 100%) was tested in Dengue- and yeast-induced pyrexia in rats. At 1 h, the raised temperature was reduced, and the significance was high (p < 0.01). The experiment was controlled with Paracetamol 150 mg.

2.6.5. Altering the Perception of Malodor
A composition containing Frankincense, Benzyl benzoate, an aldehyde mixture, Amyl salicylate, essential oil of Hedychium spicatum, Vanillin, Rose oil, Rose oil absolute, Ylang Ylang oil, Mexican pepper leaf oil, and lignaloe wood oil.

2.6.6. Composition Containing Active Sunscreen Agent
Hexane and ethyl acetate fractions containing p-methoxycinnamic acid esters were isolated from Hedychium spicatum extract over silica gel (60–120 mesh). Formulations containing Hedychium spicatum extract were analyzed with an SPF 290S analyzer, giving an SPF of 13.97; the formulations were a sun protection cream, a sun protection shampoo, and a sun protection gel, all containing cinnamic acid ester active fractions. Skin irritation was tested in guinea pigs, and localized reversible dermal responses were observed without involvement of an immune response.

2.6.7. Composition Treating Tinea Infection
Hedychium spicatum extract was prepared from the pulverized rhizome by extraction with chloroform at 60 °C (6.5% w/w). Ethyl-p-methoxycinnamate was extracted and isolated by silica gel column chromatography using pet ether in ethyl acetate, and final crystallization was carried out using pet ether. The product, with an m.p. of 48–50 °C, was characterized by 1H NMR and MS (m/z 206, M+).

2.6.8. Anti-TNF Alpha Activity of Hedychium spicatum Extract and Lead Molecule
TNF-α production induced by lipopolysaccharide (LPS) was assayed in human peripheral blood mononuclear cells (hPBMCs). Extract of Hedychium spicatum inhibited TNF-α production by 5–96% at concentrations of 10 µg/mL to 100 µg/mL, while Ethyl-p-methoxycinnamate inhibited TNF-α by 0–80% at concentrations of 10 µg/mL to 100 µg/mL in human polymorphonuclear cells.

2.6.9. Ointment Cream Formulation and Its In Vitro Anti-Dermatophytic Activity
A 10% w/w Hedychium spicatum extract was used with other formulation ingredients. Trichophyton mentagrophytes and Microsporum gypseum cultures were grown on PDA agar slants at 28 °C. Extract of Hedychium spicatum (0.01 to 0.10 mg/mL), Ethyl-p-methoxycinnamate (0.01 to 0.10 mg/mL), and the ointment (100, 50, 25, 10, and 5 mg/mL) were assayed against the controls Ketoconazole 2% at 0.5 mg/mL and Tolnaftate 1% at 0.05 mg/mL as standards. The minimum inhibitory concentrations (MIC) were found to be 0.04 mg/mL for the extract, 0.03 mg/mL for Ethyl-p-methoxycinnamate, 0.5 mg/mL for Ketoconazole 2%, and 0.05 mg/mL for Tolnaftate 1%.

2.6.10. Antidiabetic Activity of Extract and Composition Used to Reduce Blood Glucose Level
Ethanol extract of the leaves and pseudostem of Hedychium coronarium was tested for glucose tolerance in normal rats (a dose of 750 mg/kg reducing blood glucose within 120 min) and in mice with type-II diabetes (a dose of 1.5 g/kg causing a significant reduction in blood glucose, with a significance value of p < 0.01). An intraperitoneal glucose tolerance test was carried out in normal mice (dose of 1.5 g/kg, p < 0.01), while an insulin increase test was carried out in normal mice (dose of 1.5 g/kg, p < 0.01). A glucose tolerance test was carried out in rats with type-I diabetes (dose of 1.5 g/kg, p < 0.01), and an insulin increase test was performed in mice with type-II diabetes (dose of 1 g/kg, p < 0.01). An insulin resistance test was carried out in rats with type-I diabetes (dose of 1 g/kg, p < 0.01). The water extract of the leaves and pseudostem of Hedychium coronarium was tested for glucose tolerance in normal rats (dose of 1.5 g/kg, p < 0.01), and a water–ethanol extract was tested for glucose tolerance in normal rats (dose of 0.8 g/kg, p < 0.01).

2.6.11. Anti-Inflammatory Composition in Cream
Inflammatory cytokines (TNF-alpha, IL-6, and IL-1beta, in pg/mg protein) were measured in the blood serum of mice by an enzyme-linked immunosorbent assay (ELISA), which showed the topical synergistic anti-inflammatory activity of a blend of three essential oils, comprising 60% Cymbopogon citratus oil, 20% Zanthoxylum armatum oil, and 20% Hedychium spicatum oil, which is beneficial in inflammatory arthritis. The macrobiotic composition of Hedychium spicatum extract, along with other ingredients, shows health benefits and is effective in facial and body care for acne, dermatitis, eczema, and conditioning of the skin. Various extract- and essential oil-based formulations have been produced for the treatment of immune system disorders such as cancer and lupus.

2.6.12. Uses in Cracked Heels
A cracked heel cream composition containing Hedychium spicatum extract may be effective as an anti-inflammatory and used as a barrier to protect the skin.

2.6.13. Therapeutic Effect of a Composition of Plant Extract of Hedychium coronarium Root for Treatment of the Human Body
The mitochondrial network in fibroblasts irradiated with UVA was examined (MitoTracker staining, ATP, and NAD+/NADH titration). Cytotoxicity and viability were checked by an XTT assay; concentrations of 0.005% and 0.01% extract were non-cytotoxic to fibroblasts. The lysosomal network in fibroblasts irradiated with UVA was also examined (LysoTracker staining); cytotoxicity and viability were checked by an XTT assay, and concentrations of 0.0005% and 0.01% extract (50% hydro-alcoholic root extract) were non-cytotoxic to fibroblasts. Evaluation of the effect of the root extract on human skin explants was then performed using a pollution model; the extract has the potential to fight against inflammatory stress mediators induced by pollution. The effect on β-endorphin production by normal human keratinocytes was evaluated; treatment of normal keratinocytes with extract concentrations of 0.001 to 0.005% induced stimulation of β-endorphin release. The antioxidant potential of the ethanol extract of the leaves and pseudostem of Hedychium coronarium was evaluated in normal human keratinocytes (using ROS detection with an H2DCFDA probe).
The extract showed inhibitory action on ROS production at a concentration of 0.01%, p < 0.122. The effect of the ethanol extract of the leaves and pseudostem of Hedychium coronarium on autophagic activity in fibroblasts was evaluated after irradiation with blue light (according to an MDC assay, the autophagic activity of fibroblast cells decreases by 13–35% at concentrations of 0.001% to 0.005%).
Hedychium plants are easily available in the Himalayan regions of India and China. Hedychium plants are used in gardening and for some traditional applications, and utilization of these plants for suitable and precise medicinal activity is needed. In our study, we found that Hedychium species are abundant in terpenes and terpenoids. Diterpenes and diterpenoids, i.e., Hedychenone, Hedychilactone D, Coronarin D, Coronarin D ethyl ether, etc., may have future potential as anti-inflammatories and may be effective against inflammatory mediators and precursors. Terpenoids are hexane-soluble, while hexane-extracted materials are not likely to be used for human intake; suitable green technologies, such as supercritical fluid extraction and ethanol-based extraction methods, may help such extracts become more prominent in human use. Extracts and individual compounds have been isolated from Hedychium spicatum and Hedychium coronarium, and both species have been studied in vitro for anti-inflammatory, antifungal, antimicrobial, and cytotoxic qualities. Identification of possible pharmacological pathways and the specific medicinal activity of individual drug candidates may help to improve and strengthen plant-based herbal APIs (active pharmaceutical ingredients). Screening of pharmacological activity using artificial intelligence/machine learning (AI/ML) methods helps to draw conclusions on the specificity of a given compound. Computational approaches such as docking, MD simulation, virtual screening for the most suitable protein-binding sites, and ADMET studies for establishing pharmacokinetic parameters may be used. To increase compounds' potency and receptor specificity, de novo drug design and receptor-based drug design approaches may be used. Terpenes are used in perfumery and aromatherapy due to their pleasant odor and aromatic effect, and this study found that Hedychium species are abundant in aromatic properties. Hedychium species contain monoterpenes and sesquiterpenes in the essential oil of the leaves, flowers, rhizomes, and roots. Terpenes are chiral in nature, which gives the compounds distinct physiological characteristics, such as odor, medicinal activity, and toxicity. Unsaturation is a key feature of terpenes, as is the presence of oxygen atoms in terpenoids. Hexane, ethyl acetate, and chloroform extracts of plant parts of Hedychium species contain various diterpenes, diterpenoids, and labdane-type diterpenoids, while methanol, ethanol, hydroalcoholic, and aqueous extract fractions contain polyphenolic compounds, flavonoids, xanthones, and some glycosides. The essential oil of Hedychium plant parts shows aromatic, anti-inflammatory, and antimicrobial activity due to mono- and sesquiterpenes. Hexane, ethyl acetate, and chloroform fractions exhibit significant anti-inflammatory, cytotoxic, and antimicrobial activity due to terpenoids and labdane diterpenes. The methanol, ethanol, and hydroalcoholic extracts of Hedychium plant parts possess antioxidant and bronchodilator activity.
Ancient herbal medication using different parts (e.g., leaves, rhizomes, and flowers) of Hedychium extracts has effective anti-inflammatory, analgesic, antidiabetic, and anti-asthmatic qualities; it is also used as an antidote to snake bites and in various other synergistic applications. This is because, in traditional medication systems, the plant’s whole parts, dried powder, and hydroalcoholic or alcoholic extracts are used, which contain all the active molecules needed to treat a disease or ailment. Out of about 100 Hedychium species, only Hedychium coronarium , Hedychium spicatum , Hedychium gardnerianum , Hedychium cylindricum Ridl. , Hedychium flavescens , Hedychium venustum , Hedychium coccineum , Hedychium ellipticum , Hedychium flavescens Carey , Hedychium longicornutum , Hedychium forrestii , Hedychium yunnanense , and Hedychium aurantiacum have been studied. Our study suggests that the extracts and essential oils of Hedychium spicatum , Hedychium coronarium , Hedychium ellipticum , Hedychium aurantiacum , and Hedychium gardnerianum have anti-inflammatory, antipyretic, skin-protective (via sunscreen), and antibacterial effects. The chemical constituents present in the hexane, chloroform, ethyl acetate, and methanolic extracts of Hedychium spicatum and Hedychium coronarium, i.e., Hedychenone, Hedychilactone D, Coronarin D, Coronarin E, 9-Hydroxy Hedychenone, 7-Hydroxy Hedychenone, Yunnacoronarin A, Coronarin D methyl ether, and Coronarin D ethyl ether (diterpenes and diterpenoids), are identified as having anti-inflammatory, anti-allergic, antibacterial, and cytotoxic effects. The study comprises useful findings regarding the extracts and isolated compounds of Hedychium species, which may be fruitful in the commercialization of specific compounds or extracts obtained from the Hedychium genus.
|
Obesity in Adults: Position Statement of Polish Association for the Study on Obesity, Polish Association of Endocrinology, Polish Association of Cardiodiabetology, Polish Psychiatric Association, Section of Metabolic and Bariatric Surgery of the Association of Polish Surgeons, and the College of Family Physicians in Poland
|
02c4aa0d-6d9e-4141-91be-ebf93c066f56
|
10097178
|
Physiology[mh]
|
The World Health Organization (WHO) recognized obesity as a disease in the last century. It was included in the International Classification of Diseases (ICD-10) under code E66 . In recent years, the incidence of overweight and obesity has been systematically increasing, and the COVID-19 pandemic caused a deterioration in the mental health of societies. Numerous studies indicate that many people began to deal with negative emotions with food during the pandemic, which may result in a further increase in the incidence of obesity. Despite this, obesity is one of the diseases that is rarely diagnosed and even less frequently treated. As international research shows, one of the reasons is the lack of knowledge of doctors about the diagnosis and treatment of obesity. Patients with obesity experience inequalities in health and limitations in self-determination not only because of the underlying disease but also when they develop other chronic diseases, due to the lack of equipment, negative attitudes of medical staff caused by stereotypical thinking, lack of knowledge in the field of obesity treatment, and inability to refer the patient to an obesity treatment center . Numerous global and European guidelines on the diagnosis and treatment of obesity signed by various societies have been published. However, cultural and healthcare organizational differences require adaptation of recommendations at national levels. This is the first joint position statement of the Polish Association for the Study on Obesity, the Polish Association of Endocrinology, the Polish Association of Cardiodiabetology, the Polish Psychiatric Association, the Section of Metabolic and Bariatric Surgery of the Association of Polish Surgeons, and the College of Family Physicians in Poland. The expert panel’s goal was to develop comprehensive, evidence-based guidelines addressing the prevention, diagnosis, and treatment of obesity and its complications in adults. The aim of the recommendations was to assist health care providers (family doctors, nurses, physiotherapists, registered dietitians, and psychologists) in the diagnosis and effective treatment of obesity in adults. The literature search was conducted using the PubMed/MEDLINE, Cochrane Library, Science Direct, and EBSCO databases from January 2010 to December 2022 for English-language meta-analyses, systematic reviews, randomized clinical trials, and observational studies from all over the world. The websites of scientific organizations, such as the WHO and EASO, were also searched. Six main topics, restricted to adults, were defined: (1) definition, causes, and diagnosis of obesity; (2) treatment of obesity; (3) treatment of main complications of obesity; (4) bariatric surgery and its limitations; (5) the role of primary care in diagnostics and treatment of obesity and barriers; and (6) recommendations for general practitioners, regional authorities, and the Ministry of Health. 3.1. Obesity—Definition The WHO defines obesity as excessive or abnormal fat accumulation causing deterioration of health. In turn, overweight is named pre-obesity when the excess of fat does not yet meet the criteria for diagnosing obesity. Obesity is a chronic disease without a tendency to spontaneously resolve and with a tendency to relapse. The etiology of obesity is complex, and various causative factors are a reason for increased food consumption not balanced by physical activity, which results in a long-lasting positive energy balance and storage of the excess energy in adipose tissue . 3.1.1.
Diagnostic Tools and Data Interpretation Despite numerous reservations about the sensitivity of body mass index (BMI) in diagnosing obesity and about factors that may produce false positive or false negative results, this simple indicator remains the main criterion for diagnosing overweight and obesity. We currently have two sets of BMI cut-off points for diagnosing obesity in adults, proposed by the WHO in 1998 and by the American Association of Clinical Endocrinologists and the American College of Endocrinology in 2016 . The comparison of these cut-off points is presented in . We recommend using the cut-off points proposed by the American Association of Clinical Endocrinologists and the American College of Endocrinology because the presence of complications indicates clinically overt obesity, not pre-obesity. 3.1.2. Determination of the Severity of the Disease According to the Judgment of the Clinician It should also be noted that visceral obesity (abnormal intraabdominal fat accumulation related to a higher risk of obesity complications) must be diagnosed based on waist circumference measurement in subjects with a BMI from 18.5 kg/m2 to 35.0 kg/m2 . Visceral obesity should be diagnosed in adults according to the International Diabetes Federation (IDF) cut-off points for waist circumference: >80 cm in women and >94 cm in men (referring to Caucasians). Waist circumference should be measured at the level of the midpoint between the bottom edge of the lowest rib and the upper edge of the iliac crest . If possible, body composition should be measured using the bioimpedance method. A percentage of fat mass > 25% in men and >35% in women allows for the diagnosis of obesity in adults, while values of 20.0–24.9% in men and 30.0–34.9% in women allow for the diagnosis of overweight (a minimal classification sketch illustrating these criteria is given at the end of this passage) . 3.2. Causes of the Development of Obesity As was mentioned above, excessive fat accumulation is the result of a long-lasting positive energy balance. However, the primary causes leading to a positive energy balance may vary from patient to patient. Diagnosis of the primary cause of excessive food intake is necessary for proper therapeutic decision-making and the most effective treatment. 3.2.1. Environmental Factors The civilizational development that has taken place in recent decades is conducive to the creation of a positive energy balance. The widespread use of motorization and automation in everyday life and professional work has reduced energy demand related to decreased physical activity. At the same time, the structure of consumed food has unfavorably changed towards highly processed products low in dietary fiber. The manufacturers’ focus on lowering prices and mass production has significantly reduced the quality of food products while increasing their energy density. In the Polish population, there is a growing trend in the consumption of fat and sugar. In an analysis of energy consumption in 167 countries conducted in 2018, Poles were ranked 10th . In addition, Polish research showed low fiber intake in the middle-aged Polish population (16 g per day, compared with the lowest recommended 25 g per day) . In recent years, due to the observed social, cultural, economic, and political changes, the concept of an obesogenic environment has emerged, that is, a living environment that promotes and strengthens behaviors of the individual leading to a positive energy balance . In this context, attention should also be paid to factors shaping consumer behavior, such as food advertising aimed at children and parents.
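As referenced above, the anthropometric criteria of Sections 3.1.1 and 3.1.2 can be read as a small set of numeric rules. The sketch below is a reading aid only: the waist-circumference and body-fat thresholds are taken directly from the text, while the WHO BMI classes are included as a commonly published reference and should be treated as an assumption here, because the comparison table itself is not reproduced in this excerpt; all function names are illustrative.

```python
# Minimal sketch of the anthropometric criteria quoted in Sections 3.1.1-3.1.2.
# Waist and body-fat thresholds follow the text (IDF waist cut-offs for
# Caucasians; bioimpedance fat-percentage ranges). The WHO BMI classes are an
# assumed, commonly published reference, since the cut-off table is not shown.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

def who_bmi_class(bmi_value: float) -> str:
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight (pre-obesity)"
    if bmi_value < 35.0:
        return "obesity class I"
    if bmi_value < 40.0:
        return "obesity class II"
    return "obesity class III"

def visceral_obesity(waist_cm: float, sex: str) -> bool:
    # IDF cut-offs for Caucasian adults: >80 cm (women), >94 cm (men);
    # per the text, waist measurement applies at BMI 18.5-35.0 kg/m2.
    return waist_cm > (80.0 if sex == "female" else 94.0)

def fat_percentage_class(fat_pct: float, sex: str) -> str:
    # Bioimpedance-based criteria quoted in the text.
    if sex == "male":
        if fat_pct > 25.0:
            return "obesity"
        if fat_pct >= 20.0:
            return "overweight"
    else:
        if fat_pct > 35.0:
            return "obesity"
        if fat_pct >= 30.0:
            return "overweight"
    return "normal"

# Example: a woman, 1.66 m, 92 kg, waist 96 cm, 38% body fat.
# bmi(92, 1.66) is about 33.4 -> "obesity class I";
# visceral_obesity(96, "female") -> True; fat_percentage_class(38, "female") -> "obesity"
```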
These advertisements often use a subliminal message indicating the health benefits of certain products or the promotion of sweets by sports celebrities. The dynamic of expenditures on advertising by fast food restaurants and companies producing sweets in Poland has an upward trend . Numerous studies have also shown the influence of the work environment on eating behavior. These factors include lack of conditions for regular meals, reduced physical activity, long working hours resulting in a heavy evening or night meal, shift work, disturbed sleep patterns or work-related stress and job dissatisfaction, and difficulties experienced in the workplace, including discrimination . These causes should be assessed on the basis of a carefully collected medical history. 3.2.2. Genetic Factors Mutations within a single gene or chromosomal changes involving several genes may be the cause of congenital disease syndromes, most often multi-symptom, in which severe obesity develops already in childhood. Almost 100 such syndromes have been described, although some of them do not yet have a name and are not genetically well characterized, and some of them can be found in the literature under several terms. At least 7% of non-syndromic early-onset severe obesity (NESO) has been shown to be the result of a single gene mutation. Most of these mutations concern genes encoding enzyme or receptor proteins involved in the regulation of the leptin–melanocortin pathway in the hypothalamus (e.g., LEP , LEPR , PC1/PC3/PCSK1 , POMC , and MC4R genes) or having a significant impact on the development of this part of the brain ( SIM1 , BDNF , and NTRK2 genes). Obesity, conditioned by polymorphisms of many genes, is the most common form of the disease. From a population point of view, significant mutations of single genes or chromosomal aberrations are only a margin of the problem. There is no single ‘obesity gene’ responsible for the development of this condition. The research on the so-called ‘candidate genes’ related to the regulation of hunger, satiety, development of adipose tissue, energy expenditure, or metabolic changes allowed us to isolate several hundred genes whose specific single nucleotide polymorphisms (SNPs) only favor the development of obesity . In the vast majority of cases of familial obesity, the reproduction of unfavorable eating habits and patterns of spending free time play a greater role than genetic factors. In addition, family dysfunction (too little or too much parental care, inability to show affection, and excessive parental demands) is an important risk factor for the development of eating disorders . The diagnosis of monogenic obesity should be considered in patients with a history of third-degree obesity, which begins in early childhood, especially if the implementation of all adequate treatment methods does not achieve a therapeutic effect. 3.2.3. Emotional Eating and Eating Disturbances (Binge Eating Syndrome and Night Eating Syndrome) In recent years, numerous studies have focused on the psychological basis of the development of obesity. The main strands of these studies concern personality traits and the dysfunction of the reward system . Not recognizing these disorders and not incorporating appropriate therapeutic methods may be the main cause of obesity treatment failures. 
The personality traits related to the risk of the development of emotional eating and eating disturbances include impulsivity (a tendency to act rapidly without consideration of consequences), disinhibition, neuroticism, extraversion, sensation seeking, inattention, insufficient inhibitory control, and a lack of cognitive flexibility . The biological aspect of food intake regulation (hunger and satiety) is associated with the response of neurotransmitters in the hypothalamus to hormonal signals from the digestive tract and adipose tissue. However, the second system affecting food intake and eating behavior is the reward system (the amygdala/hippocampus, insula, orbitofrontal cortex [OFC], and striatum). Dysfunction of the reward system, especially decreased dopamine secretion, is associated with appetite, also named food craving (the need to eat for pleasure, not for hunger). Emotions play a significant role in triggering the processes of reward-seeking motivation, learning, and consolidation of eating behavior, while the cognitive control of eating behaviors is localized in the prefrontal cortex . Emotional Eating (EE) Emotional eating, formerly called stress eating, is an ineffective strategy of dealing with emotions through food. Emotions cause stress in the body and activation of the hypothalamic-pituitary-adrenal axis. In turn, cortisol inhibits dopamine release in the reward system and slows down the inhibitory-control pathway . The COVID-19 pandemic worsened human mental health. Numerous studies have shown that many people cope with negative emotions with food . Thus, this cause of the development of obesity should be included in the diagnostic work-up. It should be noted that over time EE may worsen, and binge eating disorder may develop. All patients with obesity should be assessed for EE. In everyday clinical practice, the screening tool presented in should be used . Binge Eating Disorder (BED) In accordance with the Diagnostic and Statistical Manual of Mental Disorders (DSM), fifth edition, BED should be diagnosed when episodes of consuming unusually large amounts of food in a short time with a loss of control occur at least once per week for 3 months. In addition, at least three of the following must be present: consuming food more rapidly than normal, eating until uncomfortably full, consuming large amounts of food without the feeling of hunger, eating alone to avoid shame or feeling disgusted with oneself, and feeling depressed or guilty after overeating; there must also be no regular compensatory behavior (these criteria are illustrated in the sketch after this paragraph) . Of note, BED may be a primary cause of the development of obesity and may also develop secondarily in people suffering from obesity as a result of using numerous short-term diets . The extreme form of BED is food addiction. Symptoms of food addiction include a compulsion to eat food, lack of control over food intake, physiological withdrawal symptoms, development of tolerance (i.e., the need to eat more and more food), neglecting other activities that may give pleasure, denying that there is a problem with eating control, and continuing behaviors related to food intake despite knowing that they are harmful.
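As noted above, the DSM-5 features of BED lend themselves to a simple checklist. The sketch below is only an illustrative encoding of the criteria as summarized in this statement (episode frequency and duration, at least three associated features, and the absence of regular compensatory behavior); it is not a validated diagnostic instrument, and the function and variable names are assumptions for illustration.

```python
# Illustrative encoding of the DSM-5 BED criteria summarized above.
# Not a validated instrument; clinical diagnosis requires a full assessment.

ASSOCIATED_FEATURES = (
    "eats much more rapidly than normal",
    "eats until uncomfortably full",
    "eats large amounts without feeling hungry",
    "eats alone because of embarrassment or shame",
    "feels disgusted, depressed, or guilty afterwards",
)

def meets_bed_criteria(episodes_per_week: float,
                       duration_months: float,
                       features_present: set,
                       regular_compensatory_behavior: bool) -> bool:
    """Return True if the summarized BED criteria are formally met."""
    frequent_enough = episodes_per_week >= 1 and duration_months >= 3
    enough_features = len(features_present & set(ASSOCIATED_FEATURES)) >= 3
    return frequent_enough and enough_features and not regular_compensatory_behavior

# Example: weekly episodes for 4 months, three associated features, no purging -> True
```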
Night Eating Syndrome (NES) NES is diagnosed in subjects with recurrent episodes of excessive food consumption after dinner or eating after awakening from sleep, and at least three of the following: morning anorexia, a strong urge to eat between dinner and sleep and/or during the night, sleep onset and/or maintenance insomnia, frequently depressed mood or mood worsening in the evening, and a belief that one cannot go back to sleep without eating . All patients with obesity should be assessed for BED and NES . Both BED and NES often coexist with depression and anxiety. Depression and anxiety should be diagnosed in all patients with obesity based on the Hospital Anxiety and Depression Scale (HADS). 3.2.4. Obesity Associated with Hormonal Disturbances Obesity can develop in the course of some endocrinopathies, including: Cushing’s syndrome, ACTH dependent (Cushing’s disease), and ACTH independent; Hypothyroidism in the course of primary or secondary thyroid dysfunction; Pituitary dysfunction in the form of multihormonal hypofunction of this gland, including growth hormone deficiency; Damage to the hypothalamus with the impaired secretion of hypothalamic neurohormones. The guidelines of the European Society of Endocrinology (ESE) from 2020 contain the current recommendations regarding their diagnosis . Cushing’s Syndrome The prevalence of hypercortisolism in people with obesity is estimated at 0.9%. Routine testing for hypercortisolemia is not recommended except when it is suspected on clinical examination (blue-red skin stretch marks, bruising, or proximal muscle weakness) and in resistant hypertension. Conducting laboratory hormonal tests is not recommended in patients with iatrogenic Cushing’s syndrome, especially those undergoing chronic glucocorticoid therapy. In patients with obesity in whom bariatric surgery is planned, tests to exclude hypercortisolemia should be considered . Diagnostic tests: - Inhibition test with 1 mg dexamethasone; - Assessment of free cortisol concentration in a 24 h urine collection or late evening cortisol concentration in saliva; - If there is confirmed endogenous hypercortisolism, then measure ACTH levels and plan imaging tests . Hypothyroidism Overt hypothyroidism occurs in 14% of patients with obesity and subclinical hypothyroidism in another 14.6% of patients with obesity; their frequency is significantly higher than in the general population (in Europe, overt hypothyroidism occurs with a frequency of 0.2–5.3%, and subclinical hypothyroidism more often, at 4–10%), and the incidence of undiagnosed hypothyroidism is estimated at about 5%. The assessment of thyroid function is recommended in all patients with obesity . Diagnostic tests: - Serum TSH levels as part of tests performed in all people with obesity, regardless of the presence of symptoms suggesting thyroid dysfunction; - Free thyroxine (FT4) and anti-thyroid peroxidase (anti-TPO) antibodies are recommended to be measured if elevated TSH is found. The reference ranges for TSH and FT4 in patients with obesity are the same as in the general adult population . An ultrasound examination of the thyroid gland is recommended for a full assessment of the thyroid gland, although the ESE guidelines do not require routine thyroid ultrasound examination in obese patients if no abnormalities are found in the physical examination of the thyroid gland.
Pituitary Dysfunction in the Form of Multihormonal Hypofunction of the Pituitary Gland, Including Growth Hormone (GH) Deficiency, and Rare Damage to the Hypothalamus with Impaired Secretion of Hypothalamic Neurohormones They occur most often after surgery or radiotherapy in the area of the hypothalamus and pituitary gland and may be caused by compression (tumors, craniopharyngioma, or metastases), ischemia, trauma, sarcoidosis, storage diseases (hemochromatosis and histiocytosis), autoimmunity (lymphocytic hypophysis) and infectious factors. Diagnostic tests: - Serum GH, FSH and LH, TSH, ACTH, and PRL levels; - Serum insulin-like growth factor type 1 (IGF-1), estradiol, testosterone, cortisol, fT 3 , and fT 4 levels; - Stimulating tests (with insulin, with arginine, with GH-RH, with LH-RH, and with CRH) . 3.2.5. Medication-Related Obesity Glucocorticoids Weight gain occurs in approximately 70% of patients treated with glucocorticoids, of which 20% exceed 10 kg. The effect of glucocorticoids on food intake is complex and includes both changes in the secretion of neurotransmitters responsible for the regulation of satiety and hunger in the hypothalamic nuclei, as well as neurotransmitters responsible for the hedonic aspect of food intake in the reward system . Hypoglycemic Drugs Hypoglycemic drugs promoting weight gain include insulin, insulin analogs, sulfonylureas, and thiazolidinediones. Weight gain during insulin and insulin-analogs use is dose-dependent, related to stimulation of food intake, episodes of hypoglycemia, and fluctuating glucose concentrations. Many patients eat not only when symptoms of hypoglycemia appear but also because they are afraid of their occurrence. Sulfonylureas stimulate the secretion of endogenous insulin, which may result in hypoglycemia and significant fluctuations in glucose levels, and, consequently, in increased food intake. Weight gain also occurs with thiazolidinediones and is dose-proportional. The weight gain effects of these drugs include fluid retention, increased storage of triglycerides in adipocytes, and enhanced adipogenesis. Interestingly, the accumulation of adipose tissue primarily occurs in the visceral deposit . Antihypertensive Drugs It has been known for many years that the use of beta-adrenergic antagonists (except carvedilol and nebivolol) causes weight gain in some people, and it is associated with genetic variants of the beta-adrenergic receptors. These drugs decrease energy expenditure—the basal metabolic rate by 4–9% and postprandial thermogenesis by 25%. They also inhibit the activity of hormone-sensitive lipase and, in consequence, lipolysis. In addition, one of the side effects may be weakness and fatigue and, in consequence, decreased physical activity . Psychotropic Medication Neuroleptics Up to 80% of patients on atypical neuroleptics gain 20% or more of their normal weight. These drugs increase food intake by affecting the reward and punishment systems and increasing appetite. This is the result of their antagonistic effect on dopaminergic type 2 (D2) and serotonin type 2A (5-HT2A) receptors. They can also significantly affect histamine (H1) receptors and, to a lesser extent, α 1 -adrenergic and serotonergic type 2C (5-HT2C) receptors. The fact that there are differences in the amount of weight gain when using different drugs of this class is related to their potency in blocking the activity of particular receptors. The risk of weight gain during neuroleptics use is presented in . 
Antidepressants It should be noted that in a quarter of patients using antidepressants, weight gain is observed. The risk factors of weight gain associated with antidepressant use include the type of medication, duration of pharmacotherapy, female sex, and overweight or obesity before initiation of the treatment. The risk of weight gain related to antidepressant use is presented in . Weight gain may also be experienced by patients with bipolar disorder treated with lithium or valproate. Antiepileptic drugs Weight gain has been observed in 71% of those treated with valproic acid and 43% of those treated with carbamazepine. Weight gain occurred less frequently during treatment with pregabalin and gabapentin. There was no change in body weight during treatment with lamotrigine, levetiracetam, and phenytoin. On the other hand, felbamate, topiramate, and zonisamide cause weight reduction through an unknown mechanism . 3.3. Consequences and Complications of Obesity Obesity is a chronic disease that can lead to disability. Musculoskeletal, cardiovascular, and mental diseases are the three most common reasons for people with obesity to receive a disability pension. Moreover, people with obesity are more likely to lose their jobs, retire earlier, take sick leave more often, be less productive at work, and be injured in the workplace. In addition, people with childhood obesity often develop physical disabilities at a young age and do not enter the labor market at all. It was also shown that being obese at the age of 18 increased the risk of taking disability benefits by 35%. It has also been observed that an increase in BMI by 1 kg/m2 increases the risk of physical disability by 5%. Factors that increase the risk of developing disability in patients with obesity are anxiety and depressive disorders . A systematic review of studies conducted in European countries showed that patients with obesity took about 10 days longer sick leave per year than those of normal weight. The risk of sick leave lasting from 2 weeks to 12 weeks was 34% higher among patients with obesity, and the risk of sick leave longer than 3 months was 63% higher . Being overweight or obese predisposes to the development of numerous dangerous complications, including metabolic, mechanical, and others. 3.3.1. The Metabolic Complications of Obesity These complications develop as a result of excessive accumulation of visceral adipose tissue with local inflammation, adipokine secretion disturbances, and insulin resistance. The adipose tissue becomes inefficient at storing energy, which leads to ectopic accumulation of fat in the liver and skeletal muscle and to the development of insulin resistance.
Systemic inflammation, changes in adipokine secretion, insulin resistance, and hyperinsulinemia are the key links in the development of obesity complications in adults, such as: Nonalcoholic fatty liver disease (NAFLD), currently called metabolic-associated fatty liver disease (MAFLD); Pre-diabetes (impaired fasting glucose [IFG] and impaired glucose tolerance [IGT]) and type 2 diabetes; Atherogenic dyslipidemia (decreased HDL-C and elevated TG, with frequently only slight changes in TC and LDL-C concentrations); Cardiovascular diseases (hypertension, coronary artery disease, carotid atherosclerosis, and stroke); Obesity-induced glomerulopathy; Cancers (e.g., colon, breast, and endometrium); Hormonal disturbances that lead to infertility in women (functional hyperandrogenism and polycystic ovary syndrome [PCOS]) and men (hypogonadism). Non-Alcoholic Fatty Liver Disease (NAFLD)/Metabolic-Associated Fatty Liver Disease (MAFLD) The diagnostic criteria of NAFLD are hepatic steatosis > 5% and exclusion of secondary causes of liver disease, including ‘significant’ alcohol usage. In contrast, the diagnostic criteria for MAFLD, formulated in 2020 by an expert group utilizing a two-stage Delphi consensus, are hepatic steatosis > 5% and the presence of metabolic risk drivers, such as type 2 diabetes and overweight/obesity by ethnic-specific BMI classifications. In people with normal weight, the diagnosis of MAFLD requires hepatic steatosis > 5% and at least two of seven risk factors, including waist circumference > 102 cm in Caucasian men and >88 cm in Caucasian women; blood pressure > 130/85 mmHg or hypotensive therapy; plasma triglycerides > 150 mg/dL or specific drug treatment; plasma HDL cholesterol < 40 mg/dL for men and <50 mg/dL for women or specific drug treatment; prediabetes (fasting glucose levels 100–125 mg/dL or 2 h post-load glucose levels 140–199 mg/dL or HbA1c 5.7–6.4%); homeostasis model assessment of insulin resistance score > 2.5; and plasma C-reactive protein levels (high sensitivity CRP) > 2 mg/L . In all patients with overweight or obesity, an ultrasound of the liver should be performed. In patients with normal weight, the risk factors should be assessed. MAFLD is a progressive process from steatosis through inflammation and fibrosis to cirrhosis or hepatocellular carcinoma. However, the leading causes of premature death among people with NAFLD are cardiovascular complications . The noninvasive method of fibrosis assessment is the FIB-4 index (based on age, ALT and AST activity, and platelet count); the normal-weight MAFLD criteria and the FIB-4 calculation are illustrated in the sketch after this paragraph . In patients with MAFLD, the main method of treatment is the effective management of obesity. However, the weight reduction should be no more than 0.5 kg per week. Too rapid weight loss induces the formation of lithogenic bile and may cause increased fatty liver . There is no safe amount of alcohol for MAFLD . If indicated, pharmacotherapy appropriate to the severity of carbohydrate and lipid metabolism disturbances caused by MAFLD should be used . The use of ursodeoxycholic acid (UDCA) in a dose of 10–15 mg/kg/day should be considered . Prediabetes and Type 2 Diabetes Prediabetes includes impaired fasting glucose (IFG), related to fatty liver and hepatic insulin resistance, and impaired glucose tolerance (IGT), related to muscle fat and muscle insulin resistance.
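Before continuing with the carbohydrate disturbances, the sketch referenced above summarizes the MAFLD work-up in code form. The seven risk-driver thresholds are taken directly from the text; the FIB-4 formula is the commonly published one (age times AST, divided by platelet count times the square root of ALT) and should be treated as an assumption here, since the text lists only its components; all function names are illustrative.

```python
# Sketch of the normal-weight MAFLD criteria and the FIB-4 index discussed above.
# Risk-driver thresholds follow the text; the FIB-4 formula is the commonly
# published version (an assumption - the text lists only its components).
from math import sqrt

def mafld_risk_drivers(waist_cm, sex, sbp, dbp, on_bp_drugs,
                       tg_mg_dl, on_tg_drugs, hdl_mg_dl, on_hdl_drugs,
                       prediabetes, homa_ir, hs_crp_mg_l):
    """Count the seven metabolic risk drivers for a normal-weight Caucasian adult."""
    drivers = [
        waist_cm > (102 if sex == "male" else 88),
        sbp > 130 or dbp > 85 or on_bp_drugs,
        tg_mg_dl > 150 or on_tg_drugs,
        hdl_mg_dl < (40 if sex == "male" else 50) or on_hdl_drugs,
        prediabetes,             # IFG, IGT, or HbA1c 5.7-6.4%
        homa_ir > 2.5,
        hs_crp_mg_l > 2,
    ]
    return sum(drivers)

def mafld_in_normal_weight(steatosis_pct, n_drivers):
    # Hepatic steatosis > 5% plus at least two of the seven risk drivers.
    return steatosis_pct > 5 and n_drivers >= 2

def fib4(age_years, ast_u_l, alt_u_l, platelets_10e9_l):
    # Assumed, commonly published FIB-4 formula: age*AST / (PLT*sqrt(ALT)).
    return (age_years * ast_u_l) / (platelets_10e9_l * sqrt(alt_u_l))

# Example: fib4(55, 48, 40, 210) is roughly 2.0; higher values suggest more
# advanced fibrosis and a need for further hepatological assessment.
```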
Disturbances associated with IFG result in an increase in hepatic glucose production and fasting hyperglycemia, but during activity, plasma glucose concentration gradually decreases as glucose is used as energy by muscles that remain insulin sensitive. In isolated IGT, on the other hand, muscle insulin resistance and a defect in the second phase of insulin secretion result in long-term hyperglycemia. In addition, the post-prandial release of glucagon-like peptide-1 is impaired, resulting in decreased insulin secretion . It is estimated that about 70% of people with prediabetes will develop type 2 diabetes in the future if obesity is not effectively treated. Prediabetes is an early stage in the development of type 2 diabetes. The progression of carbohydrate metabolism disorders towards diabetes is associated with impaired compensatory insulin secretion due to increased apoptosis of pancreatic islet β cells, which is facilitated by both impaired GLP-1 secretion and increased release of pro-inflammatory cytokines and leptin by visceral adipose tissue, as well as by increased glucagon secretion, increased hepatic glucose synthesis, and progressive changes in skeletal muscle metabolism . All patients with overweight, obesity, or visceral obesity should be screened for IFG by testing fasting blood glucose at least 12 h after the last meal. Patients with fasting glucose levels of 100–125 mg/dL should have an oral glucose tolerance test with 75 g of glucose. Diabetes should be diagnosed based on the criteria of the Polish Diabetology Society . The main method of treatment in patients with prediabetes and type 2 diabetes is the effective management of obesity. Metformin is recommended in patients with prediabetes and as first-line therapy in most patients with type 2 diabetes. Other classes of hypoglycemic agents are useful in combination with metformin or when metformin is contraindicated or not tolerated. Their selection should be based on the balance between the efficacy and side effect profile. All patients with type 2 diabetes and established or subclinical cardiovascular disease should be treated with the GLP-1 RA class or SGLT2i class . Atherogenic Dyslipidemia Atherogenic dyslipidemia is characterized by elevated serum triglyceride levels of at least 150 mg/dL (~1.7 mmol/L), elevated serum levels of triglyceride-rich very low-density lipoproteins (VLDL), and decreased HDL cholesterol, in men below 40 mg/dL (~1 mmol/L) and in women below 45 mg/dL (~1.2 mmol/L). Serum LDL may be normal or elevated, with an increased percentage of oxidized particles (oxLDL). These abnormalities are the result of fatty liver and increased triglyceride and VLDL production, as well as decreased HDL cholesterol synthesis . Atherogenic dyslipidemia is associated with a residual risk of developing coronary heart disease in patients with serum LDL levels of 70 mg/dL or less, to a similar or greater extent than in the overall group . The diagnosis of atherogenic dyslipidemia is based on the assessment of the lipid profile. Measurement of the lipid profile should be performed in all people over 40 years and in all younger persons with cardiovascular risk factors, including obesity and MAFLD . The main method of treatment of atherogenic dyslipidemia is the management of obesity. In addition, statins in combination with fibrates or omega-3 fatty acids should be used . Arterial Hypertension Obesity is a major risk factor for developing arterial hypertension.
The links to the pathogenesis of obesity-induced hypertension are complex, but each of them is based on excess visceral adipose tissue. These include inflammation and endocrine dysfunction of adipose tissue, insulin resistance, endothelial dysfunction, increased sympathetic nervous system activity, activation of the renin–angiotensin–aldosterone system, dysfunction of the natriuretic peptide system, and, rarely, the development of obesity-related glomerulopathy. The effect of these changes is increased cardiac output, peripheral vasoconstriction, and impaired pressure natriuresis (water and sodium retention and increased blood volume) . The diagnosis of arterial hypertension should not be based on a single blood pressure measurement taken during a single visit. Exceptions are rare situations in which blood pressure is significantly elevated (grade 3 arterial hypertension) or in which there is clear evidence of complications of arterial hypertension (e.g., left ventricular hypertrophy, hypertensive retinopathy with exudates and hemorrhages, or kidney damage). In people with mean blood pressure values below 180/110 mmHg, arterial hypertension should be diagnosed on the basis of at least two blood pressure measurements taken during at least two separate visits. It should be noted that the basis for the diagnosis and treatment of arterial hypertension is still measurements made in the doctor’s office. However, arterial hypertension can also be diagnosed based on out-of-office measurements, i.e., ambulatory blood pressure monitoring (ABPM) and home measurements. In most patients, blood pressure should be measured using a standard arm cuff (width 12–13 cm, length 35 cm); if the patient’s arm circumference is >32 cm, a larger cuff should be used. At least 30 min before the measurement, the patient should refrain from consuming coffee, smoking cigarettes, and taking other stimulants. The measurement should be performed after at least five minutes of rest in a sitting position with the back supported, in a quiet room with maintained thermal comfort. As a standard, at least three measurements should be taken during the same visit at 1–2 min intervals, and the blood pressure value is determined as the average of the last two measurements. If blood pressure varies between measurements (>10 mmHg), additional measurements should be taken. At the initial assessment, all patients should undergo an orthostatic test, taking blood pressure measurements at 1 and 3 min after the change from sitting to standing position . The therapeutic goals in patients with obesity under 65 years are blood pressure values of 120–129/70–79 mmHg, in patients aged 65–80 years 130–139/70–79 mmHg, and in those over 80 years 130–150/70–79 mmHg (the office protocol and these targets are illustrated in the sketch after this paragraph) . An important part of arterial hypertension management is weight reduction. Combined pharmacotherapy is recommended in obese patients. Combinations of an angiotensin-converting enzyme inhibitor (ACE-I) or angiotensin receptor blocker (ARB) with a diuretic or calcium channel blocker (CCB) should be used as first-line drugs. In the second step, if the therapeutic goal is not achieved, triple therapy is preferred (a combination of ACE-I or sartan with a CCB and a diuretic, with separate therapy when there are indications for treatment with β-blockers). In the third step, a fourth drug should be added . Obesity-Related Glomerulopathy (ORG) In patients with obesity, there is an increase in renal blood flow and glomerular filtration, resulting in dilation of the afferent glomerular arterioles.
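Returning briefly to the measurement protocol and therapeutic goals described above, the small sketch below restates them in code as a reading aid; the numeric thresholds are quoted from the text, the function names are illustrative, and this is not clinical software.

```python
# Sketch of the office blood-pressure protocol and age-stratified targets above.
# Reading aid only; thresholds are quoted from the text.

def office_bp(readings_mmhg):
    """Average the last two of at least three (systolic, diastolic) readings."""
    if len(readings_mmhg) < 3:
        raise ValueError("At least three readings per visit are required")
    # Per the text, take extra readings if values differ by more than 10 mmHg.
    last_two = readings_mmhg[-2:]
    systolic = sum(r[0] for r in last_two) / 2
    diastolic = sum(r[1] for r in last_two) / 2
    return systolic, diastolic

def bp_target_range(age_years):
    """Therapeutic goals for patients with obesity, as quoted in the text."""
    if age_years < 65:
        return "120-129/70-79 mmHg"
    if age_years <= 80:
        return "130-139/70-79 mmHg"
    return "130-150/70-79 mmHg"

# Example: office_bp([(146, 92), (142, 90), (140, 88)]) -> (141.0, 89.0)
#          bp_target_range(58) -> "120-129/70-79 mmHg"
```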
The links of ORG pathogenesis are glomerular hyperfiltration, insulin resistance and hyperinsulinemia, hyperleptinemia, reduced anti-inflammatory effect of adiponectin, and chronic inflammation. Hyperleptinemia results in an increased secretion of transforming growth factor β (TGF-β), which stimulates the proliferation of endothelial and mesangial cells and the overproduction of extracellular matrix. Hyperinsulinemia not only stimulates myocyte proliferation in the media of the arteries but also promotes glomerulosclerosis by stimulating collagen synthesis. An additional factor involved in the pathogenesis of ORG is dyslipidemia. Characteristic symptoms of ORG are proteinuria and gradual impairment of renal excretory function. ORG is a progressive nephropathy, and the rate of its progression depends on the occurrence of complications of obesity, such as arterial hypertension and type 2 diabetes . ORG is a rarely diagnosed entity; the diagnosis is based on kidney biopsy in patients with high-grade proteinuria. The Main Hormonal Disturbances Growth hormone (GH) deficiency A decrease in GH and IGF-1 levels may be considered an obesity complication in patients without a pituitary disease. The routine determination of GH and IGF-1 in patients with obesity is not recommended . Hypogonadism in Men It occurs in 32.7% (up to 45%) of men with obesity. In all men with obesity, an assessment of symptoms of hypogonadism (decreased libido, erectile dysfunction, infertility, muscle weakness, gynecomastia, gynoid type of fat distribution, and androgenic hair loss) should be conducted. Hormonal work-up in men without these symptoms is not recommended. Diagnostic tests: - Serum concentrations of total and free testosterone, sex hormone-binding globulin (SHBG), FSH, LH, and PRL. The reference ranges for serum testosterone levels in men with obesity are age-specific. Hypogonadism is diagnosed in men with serum testosterone levels ≤ 11 nmol/L (3.2 ng/mL) with the presence of symptoms . Functional hyperandrogenism in women and polycystic ovary syndrome (PCOS) It occurs in 9.1–29% of women with obesity. Diagnostics are recommended only in women with menstrual disturbances, chronic anovulation, infertility, and/or symptoms of androgenization (hirsutism, androgenetic alopecia, or acne). Diagnostic tests: - Serum concentrations of FSH, LH, PRL, estradiol, total testosterone, and SHBG (between days 3 and 5 of the menstrual cycle); - Concentrations of androstenedione, 17-hydroxyprogesterone, and progesterone (depending on individual indications). Moreover, an ultrasound examination of the ovaries and determination of plasma glucose concentration are recommended . 3.3.2. Diseases Caused by Mechanical Consequences of Excessive Accumulation of Visceral Fat Gastroesophageal Reflux Disease (GERD) Patients with obesity report numerous symptoms related to the function of the esophagus and stomach, including difficulties in swallowing, pain while eating or pain after eating, a feeling of fullness and retention in the stomach, heartburn, or regurgitation, which paradoxically does not translate into weight loss. The factors contributing to the occurrence of these disorders in patients with obesity include increased intra-abdominal pressure and a high position of the diaphragm as a result of the accumulation of visceral fat.
These disorders are also favored by anatomical and functional abnormalities of the esophagus and stomach, such as abnormal esophageal motility and, thus, impaired esophageal clearance (i.e., the ability to clear the esophagus of food residues or regurgitated contents), lowered pressure of the lower esophageal sphincter, transient lower esophageal sphincter relaxations, and hiatal hernia . All patients with overweight or obesity should be evaluated for symptoms of GERD, and in patients whose symptoms are not controlled by treatment, an endoscopy should be performed . The treatment of GERD in patients with overweight or obesity includes at least 10% weight loss and the use of a proton pump inhibitor . Obesity Hypoventilation Syndrome (OHS) OHS is defined as the occurrence of symptoms of hypoventilation in patients with obesity when all other potential causes of hypoventilation have been excluded . In OHS, the hypoxemia observed during physiological sleep deepens, and hypercapnia increases to pathological values that meet the definition of respiratory failure, i.e., a state in which the partial pressure of oxygen in arterial blood falls below 60 mmHg (PaO2 < 60 mmHg) or the partial pressure of carbon dioxide rises to ≥45 mmHg (PaCO2 ≥ 45 mmHg). The primary pathomechanism of hypercapnia is hypoventilation. However, it should be emphasized that hypercapnia, in the case of pure, untreated OHS, is always accompanied by hypoxemia (type 2 respiratory failure). Respiratory acidosis is compensated by renal production of bicarbonates (HCO3−). A specific feature of respiratory failure in the course of OHS is that it appears insidiously at night, especially during REM sleep, then appears and persists during NREM sleep, and finally becomes consolidated also during the day. In more advanced cases, respiratory disturbances occur around the clock and are no longer compensated by daytime hyperventilation . Clinical symptoms of OHS include impaired concentration, excessive daytime sleepiness, decreased exercise tolerance, and morning headaches . All patients with obesity, especially those with grade II and III obesity, should be evaluated for OHS. Obesity management is an essential element of therapy. Obstructive Sleep Apnea Syndrome (OSA) OSA is the result of sleep-related decreased airflow and oxygenation. An increase in body weight significantly increases the risk of developing OSA. Neck circumferences above 40.6 cm in women and 43.2 cm in men are associated with an increased risk of OSA . Symptoms include loud snoring, interruptions (apneic or hypopnea pauses) in breathing, and sleep-cycle fragmentation that, in turn, produce daytime fatigue, morning headache, lack of concentration, erectile dysfunction, and a general decrease in quality of life. In patients with these symptoms, polysomnography should be considered. Management of obesity is an essential part of therapy . 3.3.3. Mechanical Damage Caused by Excessive Load Osteoarthritis Numerous studies indicate an indisputable relationship between obesity and the development of knee osteoarthritis. A correlation was found between the diagnosis of obesity and the development of various deformities of the knee, which is believed to be a mechanical factor in the development of obesity-dependent gonarthrosis . Obese people adapt to their body weight by walking more slowly with their feet wider apart. They experience greater loads affecting the joints of the lower limbs, which predispose them to damage.
Obesity is associated with structural disorders as well as impaired gait function, flattening of the arches of the foot, and excessive pronation in the ankle joint. When walking, there is an increase in the mobility of the rear foot, and this causes forefoot abduction to a greater extent than in a normal-weight person. Being overweight leads to increased pressure on the loaded joints. Postural instability leading to falls was found in people diagnosed with III-degree obesity . Obesity is a significant risk factor for pain in the neck, shoulder, elbow, wrist, and hand. Obesity in professionally active individuals predisposes to the development of tendinitis in the upper limbs. Numerous studies indicate the risk of developing ulnar nerve groove syndrome or carpal tunnel syndrome in obese patients, especially those who perform repetitive activities during their professional activity. Obesity significantly increases the risk of rotator cuff tendinitis. Obesity is also a risk factor for greater trochanteric bursitis, a common cause of lateral hip pain in middle-aged and older adults . Spinal pain syndromes caused by degenerative disc disease, stenosis of the spinal canal, and diseases of the intervertebral joints are very common problems in society, causing significant morbidity. This generates significant consequences for work efficiency and utilization of health services. The relationship between obesity and the described diseases is ambiguous. Some studies evaluating this issue find no evidence of a link between obesity and low back pain. However, compared with people with normal weight, obese patients more often suffer from radicular pain and present neurological symptoms . Screening for symptoms and physical examination for osteoarthritis should be performed in all patients with overweight and obesity. Obesity management is an essential part of osteoarthritis treatment . Chronic Venous Disease Epidemiological studies have shown that obesity is the risk factor for varicose veins in both sexes. It has been suggested that the main mechanism of impairing venous function, particularly venous return, and possibly increasing the rate of reflux in patients with obesity is the high pressure in the abdomen . 3.3.4. Other Cholelithiasis The risk of occurrence of cholesterol gallstone formation and symptomatic cholelithiasis increases significantly in patients with obesity and is augmented by weight loss, especially if it is fast. Approximately one-third of stones are symptomatic. The incidence of new gallstone formation is 10–12% after 8–16 weeks of application of a low-calorie diet and above 30% in the first 18 months after gastric bypass surgery. The higher risk of gallstone formation has also been observed in clinical trials that assessed the efficacy and safety of GLP-1 analogs. The additional risk factors for gallstone formation during weight loss include loss of more than 25% of the initial body weight, rate of weight loss above 1.5 kg per week, a very low-calorie diet containing little or no fat, and periods of absolute fasting. Cholelithiasis may be prevented by treatment with ursodeoxycholic acid 500–600 mg per day during the first 6 months of weight loss . Stress Urinary Incontinence Obesity is a major risk factor for urinary incontinence in women, and its frequency and severity increase with an increase in BMI values and duration of obesity. Screening for urinary incontinence should be performed in all women with overweight or obesity . 
Asthma Asthma symptoms and severity are associated with increased proinflammatory cytokines and adipokines related to obesity. Numerous studies have shown improvement in forced vital capacity after an average 7.5% weight reduction in patients with obesity and asthma. Medical history, symptomatology, and spirometry should be considered in all patients with overweight or obesity with an increased risk of asthma and reactive airway disease . Depression and Anxiety Depression and bipolar disorder (BD) It has been shown that 30–50% of people seeking treatment for obesity had a history of depression or anxiety. The occurrence of depression symptoms in young women is an important risk factor for the development of obesity later in life. Higher BMI values than in the general population have already been observed in adolescents with depression. The association between depression and obesity seems bidirectional. The classic symptom of depression is loss of appetite and weight; however, when mood improves during the treatment, appetite and weight increase. Of note, atypical depression is observed more frequently in patients with obesity. People with this type of depression deal with negative emotions with food. Food stimulates the release of dopamine in the reward system and temporarily improves mood. In this mechanism, depression may be the cause of the development of eating disorders. On the other hand, obesity may be a cause of the development of depression due to low self-esteem, discrimination, stigmatization, and social exclusion. Therapy in patients with obesity, especially for bipolar disorder, is often less effective than in patients with normal body weight. Some studies have shown that bipolar disorder in patients with obesity is associated with a greater degree of disability, including impairment of memory, concentration, and attention, as well as a greater relapse rate and a more severe course of the disease . The consequences of the coexistence of depression and obesity include worse patient–doctor cooperation, avoidance and social withdrawal, decreased quality of life, greater severity of depression, greater risk of disability and job loss, and suicidal thoughts and attempts. In patients with obesity, anxiety disorders such as panic attacks and agoraphobia (fear and avoidance of being out in the open and in public places) are twice as common as in normal-weight people. All patients with obesity should be screened for symptoms of depression and anxiety in the GP’s practice using the Hospital Anxiety and Depression Scale. Body weight and metabolic parameters should be monitored in all patients treated for psychiatric diseases. The family doctor should stay in touch with the psychiatrist and undertake joint actions aimed at the effective treatment of mental illnesses and limiting their consequences for physical health.
Diagnostic Tools and Data Interpretation Despite numerous reservations about the sensitivity of body mass index (BMI) in diagnosing obesity and factors that may affect the false positive or negative result, this simple indicator remains the main criterion for diagnosing overweight and obesity. We currently have two BMI cut-off points for diagnosing obesity in adults, proposed by the WHO in 1998 and by the American Association of Clinical Endocrinologists and the American College of Endocrinology in 2016 . The comparison of these cut-off points is presented in . We recommend using the cut-off points proposed by the American Association of Clinical Endocrinologists and the American College of Endocrinology because the presence of complications indicates clinically overt obesity, not pre-obesity. 3.1.2. Determination of the Severity of the Disease According to the Judgment of the Clinician It should also be noted that visceral obesity (abnormal intraabdominal fat accumulation related to a higher risk of obesity complications) must be diagnosed based on waist circumference measurement in subjects with a BMI from 18.5 kg/m 2 to 35.0 kg/m 2 . Visceral obesity should be diagnosed in adults according to the International Diabetes Federation (IDF) cut-off points of waist circumference in women > 80 cm and in men > 94 cm (referring to Caucasians). Waist circumference should be measured at the level of the midpoint between the bottom edge of the lowest rib and the upper edge of the iliac crest . If possible, body composition should be measured using the bioimpedance method. The percentage of fat mass > 25% in men and >35% in women allows for the diagnosis of obesity in adults, while values 20.0–24.9% in men and 30.0–34.9% in women allow for the diagnosis of overweight . Despite numerous reservations about the sensitivity of body mass index (BMI) in diagnosing obesity and factors that may affect the false positive or negative result, this simple indicator remains the main criterion for diagnosing overweight and obesity. We currently have two BMI cut-off points for diagnosing obesity in adults, proposed by the WHO in 1998 and by the American Association of Clinical Endocrinologists and the American College of Endocrinology in 2016 . The comparison of these cut-off points is presented in . We recommend using the cut-off points proposed by the American Association of Clinical Endocrinologists and the American College of Endocrinology because the presence of complications indicates clinically overt obesity, not pre-obesity. It should also be noted that visceral obesity (abnormal intraabdominal fat accumulation related to a higher risk of obesity complications) must be diagnosed based on waist circumference measurement in subjects with a BMI from 18.5 kg/m 2 to 35.0 kg/m 2 . Visceral obesity should be diagnosed in adults according to the International Diabetes Federation (IDF) cut-off points of waist circumference in women > 80 cm and in men > 94 cm (referring to Caucasians). Waist circumference should be measured at the level of the midpoint between the bottom edge of the lowest rib and the upper edge of the iliac crest . If possible, body composition should be measured using the bioimpedance method. The percentage of fat mass > 25% in men and >35% in women allows for the diagnosis of obesity in adults, while values 20.0–24.9% in men and 30.0–34.9% in women allow for the diagnosis of overweight . As was mentioned above, excessive fat accumulation is the result of long-lasting positive energy balance. 
However, the primary causes leading to a positive energy balance may vary from patient to patient. The primary cause of excessive food intake diagnosis is necessary for proper therapeutic decision-making and the most effective treatment. 3.2.1. Environmental Factors The civilization development that has taken place in recent decades is conducive to the creation of a positive energy balance. The widespread use of motorization and automation in everyday life and professional work has reduced energy demand related to decreased physical activity. At the same time, the structure of consumed food unfavorably changed towards highly processed and low in dietary fiber. The manufacturer’s focus on lowering prices and mass production has significantly reduced the quality of food products and, at the same time, their energy density increased. In the Polish population, there is a growing trend in the consumption of fat and sugar. In the analysis of energy consumption in 167 countries conducted in 2018, Poles were ranked 10th . In addition, Polish research showed low fiber intake in the middle-aged Polish population (16 g per day, referring to the lowest recommended 25 g per day) . In recent years, due to the observed social, cultural, economic, and political changes, a concept of an obesogenic environment has emerged. That is, the living environment of the individual promotes and strengthens the behavior of the individual leading to a positive energy balance . In this context, attention should also be paid to factors shaping consumer behavior, such as food advertising aimed at children and parents. These advertisements often use a subliminal message indicating the health benefits of certain products or the promotion of sweets by sports celebrities. The dynamic of expenditures on advertising by fast food restaurants and companies producing sweets in Poland has an upward trend . Numerous studies have also shown the influence of the work environment on eating behavior. These factors include lack of conditions for regular meals, reduced physical activity, long working hours resulting in a heavy evening or night meal, shift work, disturbed sleep patterns or work-related stress and job dissatisfaction, and difficulties experienced in the workplace, including discrimination . These causes should be assessed on the basis of a carefully collected medical history. 3.2.2. Genetic Factors Mutations within a single gene or chromosomal changes involving several genes may be the cause of congenital disease syndromes, most often multi-symptom, in which severe obesity develops already in childhood. Almost 100 such syndromes have been described, although some of them do not yet have a name and are not genetically well characterized, and some of them can be found in the literature under several terms. At least 7% of non-syndromic early-onset severe obesity (NESO) has been shown to be the result of a single gene mutation. Most of these mutations concern genes encoding enzyme or receptor proteins involved in the regulation of the leptin–melanocortin pathway in the hypothalamus (e.g., LEP , LEPR , PC1/PC3/PCSK1 , POMC , and MC4R genes) or having a significant impact on the development of this part of the brain ( SIM1 , BDNF , and NTRK2 genes). Obesity, conditioned by polymorphisms of many genes, is the most common form of the disease. From a population point of view, significant mutations of single genes or chromosomal aberrations are only a margin of the problem. 
There is no single ‘obesity gene’ responsible for the development of this condition. The research on the so-called ‘candidate genes’ related to the regulation of hunger, satiety, development of adipose tissue, energy expenditure, or metabolic changes allowed us to isolate several hundred genes whose specific single nucleotide polymorphisms (SNPs) only favor the development of obesity . In the vast majority of cases of familial obesity, the reproduction of unfavorable eating habits and patterns of spending free time play a greater role than genetic factors. In addition, family dysfunction (too little or too much parental care, inability to show affection, and excessive parental demands) is an important risk factor for the development of eating disorders . The diagnosis of monogenic obesity should be considered in patients with a history of third-degree obesity, which begins in early childhood, especially if the implementation of all adequate treatment methods does not achieve a therapeutic effect. 3.2.3. Emotional Eating and Eating Disturbances (Binge Eating Syndrome and Night Eating Syndrome) In recent years, numerous studies have focused on the psychological basis of the development of obesity. The main strands of these studies concern personality traits and the dysfunction of the reward system . Not recognizing these disorders and not incorporating appropriate therapeutic methods may be the main cause of obesity treatment failures. The personality traits related to the risk of the development of emotional eating and eating disturbances included impulsivity (tendency to act rapidly without consideration of consequences), disinhibition, neuroticism, extraversion, sensation seeking, inattention, insufficiency inhibitory control, and the lack of cognitive flexibility . The biological aspect of food intake regulation (hunger and satiety) is associated with the response of neurotransmitters in the hypothalamus to hormonal signals from the digestive tract and adipose tissue. However, the second place affecting food intake and eating behavior is the reward system (the amygdala/hippocampus, insula, orbitofrontal cortex [OFC], and striatum). The dysfunction of the reward system, especially decreased dopamine secretion, is associated with feeling appetite, also named food craving (the need to eat for pleasure, not for hunger). Emotions play a significant role in triggering processes of motivation to seek reward, learning, and consolidation of eating behavior arise, while the cognitive control of eating behaviors is localized in the prefrontal cortex . Emotional Eating (EE) Emotional eating, formerly called stress eating is a noneffective strategy for dealing with emotions with food. Emotions cause stress in the body and activation of the hypothalamic-pituitary-adrenal axis. In turn, cortisol inhibits dopamine release in the reward system and slows down the inhibitory-control pathway . The COVID-19 pandemic worsened human mental health. Numerous studies have shown that many people cope with negative emotions with food . Thus, this cause of the development of obesity should be included in the diagnosis work-up. It should be noted that over time EE may worsen, and binge eating disorder may develop. All patients with obesity should be assessed for EE. In everyday clinical practice, the screening tool presented in should be used . 
Binge Eating Disorder (BED)

In accordance with the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), BED should be diagnosed when episodes of consuming unusually large amounts of food in a short time with a loss of control occur at least once per week for 3 months. In addition, at least three of the following must be present: consuming food more rapidly than normal, eating until uncomfortably full, consuming large amounts of food without the feeling of hunger, eating alone to avoid shame or feeling disgusted with oneself, and depression or guilt after overeating, without any regular compensatory behavior. Of note, BED may be a primary cause of the development of obesity and may also develop secondarily in people suffering from obesity as a result of using numerous short-term diets. The extreme form of BED is food addiction. Symptoms of food addiction include a compulsion to eat food, lack of control over food intake, physiological withdrawal symptoms, development of tolerance (i.e., the need to eat more and more food), neglecting other activities that may give pleasure, denying that there is a problem with eating control, and continuing behaviors related to food intake despite knowing that they are harmful.

Night Eating Syndrome (NES)

NES is diagnosed in subjects with recurrent episodes of excessive food consumption after dinner or eating after awakening from sleep, together with at least three of the following: morning anorexia, a strong urge to eat between dinner and sleep and/or during the night, sleep onset and/or maintenance insomnia, frequently depressed mood or mood worsening in the evening, and a belief that one cannot go back to sleep without eating. All patients with obesity should be assessed for BED and NES. Both BED and NES often coexist with depression and anxiety. Depression and anxiety should be diagnosed in all patients with obesity based on the Hospital Anxiety and Depression Scale (HADS).

3.2.4. Obesity Associated with Hormonal Disturbances

Obesity can develop in the course of some endocrinopathies, including:
Cushing's syndrome, ACTH dependent (Cushing's disease) and ACTH independent;
Hypothyroidism in the course of primary or secondary thyroid dysfunction;
Pituitary dysfunction in the form of multihormonal hypofunction of this gland, including growth hormone deficiency;
Damage to the hypothalamus with impaired secretion of hypothalamic neurohormones.
The guidelines of the European Society of Endocrinology (ESE) from 2020 contain the current recommendations regarding their diagnosis.

Cushing's Syndrome

The prevalence of hypercortisolism in people with obesity is estimated at 0.9%. Routine testing for hypercortisolemia is not recommended except when it is suspected on clinical examination (blue-red skin stretch marks, bruising, or proximal muscle weakness) or in resistant hypertension. Laboratory hormonal tests are not recommended in patients with iatrogenic Cushing's syndrome, especially those undergoing chronic glucocorticoid therapy. In patients with obesity in whom bariatric surgery is planned, tests to exclude hypercortisolemia should be considered.
Diagnostic tests:
- Inhibition test with 1 mg dexamethasone;
- Assessment of free cortisol concentration in a 24 h urine collection or late evening cortisol concentration in saliva;
- If endogenous hypercortisolism is confirmed, measurement of ACTH levels and imaging tests should be planned.
Hypothyroidism

Overt hypothyroidism occurs in 14% of patients with obesity and subclinical hypothyroidism in another 14.6%; both frequencies are significantly higher than in the general population (in Europe, overt hypothyroidism occurs with a frequency of 0.2–5.3%, and subclinical hypothyroidism is more frequent, at 4–10%). The incidence of undiagnosed hypothyroidism is estimated at about 5%. The assessment of thyroid function is recommended in all patients with obesity.
Diagnostic tests:
- Serum TSH levels as part of the tests performed in all people with obesity, regardless of the presence of symptoms suggesting thyroid dysfunction;
- Free thyroxine (FT4) and anti-thyroid peroxidase (anti-TPO) antibodies, recommended if an elevated TSH is found.
The reference ranges for TSH and FT4 in patients with obesity are the same as in the general adult population. An ultrasound examination is recommended for a full assessment of the thyroid gland, although the ESE guidelines do not require routine thyroid ultrasound in obese patients if no abnormalities are found on physical examination of the thyroid gland.

Pituitary Dysfunction in the Form of Multihormonal Hypofunction of the Pituitary Gland, Including Growth Hormone (GH) Deficiency, and Rare Damage to the Hypothalamus with Impaired Secretion of Hypothalamic Neurohormones

These occur most often after surgery or radiotherapy in the area of the hypothalamus and pituitary gland and may be caused by compression (tumors, craniopharyngioma, or metastases), ischemia, trauma, sarcoidosis, storage diseases (hemochromatosis and histiocytosis), autoimmunity (lymphocytic hypophysitis), and infectious factors.
Diagnostic tests:
- Serum GH, FSH and LH, TSH, ACTH, and PRL levels;
- Serum insulin-like growth factor type 1 (IGF-1), estradiol, testosterone, cortisol, fT3, and fT4 levels;
- Stimulation tests (with insulin, arginine, GH-RH, LH-RH, and CRH).

3.2.5. Medication-Related Obesity

Glucocorticoids

Weight gain occurs in approximately 70% of patients treated with glucocorticoids, and in 20% it exceeds 10 kg. The effect of glucocorticoids on food intake is complex and includes changes in the secretion of neurotransmitters responsible for the regulation of satiety and hunger in the hypothalamic nuclei, as well as of neurotransmitters responsible for the hedonic aspect of food intake in the reward system.

Hypoglycemic Drugs

Hypoglycemic drugs promoting weight gain include insulin, insulin analogs, sulfonylureas, and thiazolidinediones. Weight gain during the use of insulin and insulin analogs is dose-dependent and related to stimulation of food intake, episodes of hypoglycemia, and fluctuating glucose concentrations. Many patients eat not only when symptoms of hypoglycemia appear but also because they are afraid of their occurrence.
Sulfonylureas stimulate the secretion of endogenous insulin, which may result in hypoglycemia and significant fluctuations in glucose levels and, consequently, in increased food intake. Weight gain also occurs with thiazolidinediones and is dose-proportional. The weight-gain effects of these drugs include fluid retention, increased storage of triglycerides in adipocytes, and enhanced adipogenesis. Interestingly, the accumulation of adipose tissue primarily occurs in the visceral deposit.

Antihypertensive Drugs

It has been known for many years that the use of beta-adrenergic antagonists (except carvedilol and nebivolol) causes weight gain in some people, and this is associated with genetic variants of the beta-adrenergic receptors. These drugs decrease energy expenditure—the basal metabolic rate by 4–9% and postprandial thermogenesis by 25%. They also inhibit the activity of hormone-sensitive lipase and, in consequence, lipolysis. In addition, one of their side effects may be weakness and fatigue and, in consequence, decreased physical activity.

Psychotropic Medication

Neuroleptics

Up to 80% of patients on atypical neuroleptics gain 20% or more of their normal weight. These drugs increase food intake by affecting the reward and punishment systems and increasing appetite. This is the result of their antagonistic effect on dopaminergic type 2 (D2) and serotonin type 2A (5-HT2A) receptors. They can also significantly affect histamine (H1) receptors and, to a lesser extent, α1-adrenergic and serotonergic type 2C (5-HT2C) receptors. The differences in the amount of weight gain between drugs of this class are related to their potency in blocking the activity of particular receptors. The risk of weight gain during neuroleptics use is presented in .

Antidepressants

It should be noted that weight gain is observed in a quarter of patients using antidepressants. The risk factors of weight gain associated with antidepressant use include the type of medication, duration of pharmacotherapy, female sex, and overweight or obesity before initiation of the treatment. The risk of weight gain related to antidepressant use is presented in . Weight gain may also be experienced by patients with bipolar disorder treated with lithium or valproate.

Antiepileptic drugs

Weight gain has been observed in 71% of those treated with valproic acid and 43% of those treated with carbamazepine. Weight gain occurred less frequently during treatment with pregabalin and gabapentin. There was no change in body weight during treatment with lamotrigine, levetiracetam, and phenytoin. On the other hand, felbamate, topiramate, and zonisamide cause weight reduction through an unknown mechanism.
Obesity is a chronic disease that can lead to disability. Musculoskeletal, cardiovascular, and mental diseases are the three most common reasons for people with obesity to enter a disability pension. Moreover, people with obesity are more likely to lose their jobs, retire more often, take sick leave more often, are less productive at work, and are more likely to be injured in the workplace. In addition, people with childhood obesity often develop physical disabilities at a young age and do not enter the labor market at all. It was also shown that being obese at the age of 18 increased the risk of taking disability benefits by 35%. It has also been observed that an increase in BMI by 1 kg/m2 increases the risk of physical disability by 5%. Factors that increase the risk of developing disability in patients with obesity are anxiety and depressive disorders.
A systematic review of studies conducted in European countries showed that patients with obesity took about 10 days more sick leave per year than those of normal weight. The risk of taking sick leave lasting from 2 to 12 weeks was 34% higher among patients with obesity, and the risk of sick leave longer than 3 months was 63% higher. Being overweight or obese predisposes to the development of numerous dangerous complications, including metabolic, mechanical, and other.

3.3.1. The Metabolic Complications of Obesity

These complications develop as a result of excessive accumulation of visceral adipose tissue with local inflammation, adipokine secretion disturbances, and insulin resistance. The adipose tissue becomes inefficient at energy storage, which leads to ectopic accumulation of fat in the liver and skeletal muscle and the development of insulin resistance. Systemic inflammation, changes in adipokine secretion, insulin resistance, and hyperinsulinemia are the key links in the development of obesity complications in adults, such as:
Nonalcoholic fatty liver disease (NAFLD), currently called metabolic-associated fatty liver disease (MAFLD);
Pre-diabetes (impaired fasting glucose [IFG] and impaired glucose tolerance [IGT]) and type 2 diabetes;
Atherogenic dyslipidemia (decreased HDL-C, elevated TG, with frequent slight changes in TC and LDL-C concentrations);
Cardiovascular diseases (hypertension, coronary artery disease, carotid atherosclerosis, and stroke);
Obesity-induced glomerulopathy;
Cancers (e.g., colon, breast, and endometrium);
Hormonal disturbances that lead to infertility in women (functional hyperandrogenism and polycystic ovary syndrome [PCOS]) and men (hypogonadism).

Non-Alcoholic Fatty Liver Disease (NAFLD)/Metabolic-Associated Fatty Liver Disease (MAFLD)

The diagnostic criteria of NAFLD are hepatic steatosis > 5% and exclusion of secondary causes of liver disease, including 'significant' alcohol usage. In contrast, the diagnostic criterion for MAFLD, formulated in 2020 by an expert group using a two-stage Delphi consensus, is hepatic steatosis > 5% plus a metabolic risk driver, such as type 2 diabetes or overweight/obesity by ethnic-specific BMI classifications. In people with normal weight, the diagnosis of MAFLD requires hepatic steatosis > 5% and two of seven risk factors: waist circumference > 102 cm in Caucasian men and > 88 cm in Caucasian women; blood pressure > 130/85 mmHg or hypotensive therapy; plasma triglycerides > 150 mg/dL or specific drug treatment; plasma HDL cholesterol < 40 mg/dL for men and < 50 mg/dL for women or specific drug treatment; prediabetes (fasting glucose levels 100–125 mg/dL, 2 h post-load glucose levels 140–199 mg/dL, or HbA1c 5.7–6.4%); homeostasis model assessment of insulin resistance (HOMA-IR) score > 2.5; and plasma C-reactive protein levels (high-sensitivity CRP) > 2 mg/L. In all patients with overweight or obesity, an ultrasound of the liver should be performed. In patients with normal weight, the risk factors should be assessed. MAFLD is a progressive process from steatosis through inflammation and fibrosis to cirrhosis or hepatocellular carcinoma. However, the leading causes of premature death among people with NAFLD are cardiovascular complications. A noninvasive method of fibrosis assessment is the FIB-4 test (age, activity of ALT and AST, and the number of platelets). In patients with MAFLD, the main method of treatment is the effective management of obesity.
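For orientation, the decision rule for normal-weight patients and the two laboratory indices mentioned above (HOMA-IR and FIB-4) can be expressed as a short calculation. The sketch below is illustrative only; the function names are arbitrary, and the HOMA-IR and FIB-4 formulas are the commonly published ones, not spelled out in this article, so results should always be checked against the original definitions.

    import math

    def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
        """Commonly used HOMA-IR formula (glucose in mg/dL, insulin in uU/mL)."""
        return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

    def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
        """Commonly used FIB-4 index: (age x AST) / (platelets x sqrt(ALT))."""
        return age_years * ast_u_l / (platelets_10e9_l * math.sqrt(alt_u_l))

    def mafld_in_normal_weight(steatosis_over_5pct: bool, risk_factors_met: int) -> bool:
        """MAFLD in a normal-weight person: steatosis >5% plus at least 2 of the 7
        risk factors listed in the text (waist, blood pressure, TG, HDL, prediabetes,
        HOMA-IR > 2.5, hsCRP > 2 mg/L)."""
        return steatosis_over_5pct and risk_factors_met >= 2

    # Example: a 52-year-old with AST 48 U/L, ALT 60 U/L, platelets 210 x 10^9/L
    print(round(fib4(52, 48, 60, 210), 2))       # illustrative value only
    print(round(homa_ir(102, 14), 2), "> 2.5?")  # one of the seven MAFLD risk factors

In practice, these indices only support the diagnosis; as stated above, the cornerstone of MAFLD management remains the treatment of obesity itself.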
However, the weight reduction should be no more than 0.5 kg per week. Too rapid weight loss induces the formation of lithogenic bile and may increase fatty liver. There is no safe amount of alcohol in MAFLD. If indicated, pharmacotherapy appropriate to the severity of the carbohydrate and lipid metabolism disturbances caused by MAFLD should be used. The use of ursodeoxycholic acid (UDCA) at a dose of 10–15 mg/kg/day should be considered.

Prediabetes and Type 2 Diabetes

Prediabetes includes impaired fasting glucose (IFG), related to fatty liver and its insulin resistance, and impaired glucose tolerance (IGT), related to muscle fat and its insulin resistance. Disturbances associated with IFG result in an increase in hepatic glucose production and fasting hyperglycemia, but during activity the plasma glucose concentration gradually decreases as it is used as energy by muscles that remain insulin sensitive. In isolated IGT, on the other hand, muscle insulin resistance and a defect in the second phase of insulin secretion result in long-term hyperglycemia. In addition, the post-prandial release of glucagon-like peptide-1 is impaired, resulting in decreased insulin secretion. It is estimated that about 70% of people with prediabetes will develop type 2 diabetes in the future if obesity is not effectively treated. Prediabetes is an early stage in the development of type 2 diabetes. The progression of carbohydrate metabolism disorders towards diabetes is associated with impaired compensatory insulin secretion due to increased apoptosis of pancreatic islet β cells, which is facilitated by impaired GLP-1 secretion, increased release of pro-inflammatory cytokines and leptin by visceral adipose tissue, increased glucagon secretion and hepatic glucose synthesis, and progressive changes in skeletal muscle metabolism. All patients with overweight, obesity, or visceral obesity should be screened for IFG by testing fasting blood glucose at least 12 h after the last meal. Patients with fasting glucose levels of 100–125 mg/dL should have a 75 g oral glucose tolerance test. Diabetes should be diagnosed based on the criteria of the Polish Diabetology Society. The main method of treatment in patients with prediabetes and type 2 diabetes is the effective management of obesity. Metformin is recommended in patients with prediabetes and as first-line therapy in most patients with type 2 diabetes. Other classes of hypoglycemic agents are useful in combination with metformin or when metformin is contraindicated or not tolerated. Their selection should be based on the balance between efficacy and side-effect profile. All patients with type 2 diabetes and established or subclinical cardiovascular disease should be treated with the GLP-1 RA class or SGLT2i class.

Atherogenic Dyslipidemia

Atherogenic dyslipidemia is characterized by elevated serum triglyceride levels of at least 150 mg/dL (~1.7 mmol/L), elevated serum levels of triglyceride-rich very low-density lipoproteins (VLDL), and decreased HDL cholesterol, below 40 mg/dL (~1 mmol/L) in men and below 45 mg/dL (~1.2 mmol/L) in women. Serum LDL may be normal or elevated, with an increased percentage of oxidized particles (oxLDL). These abnormalities are the result of fatty liver and increased triglyceride and VLDL production, as well as decreased HDL cholesterol synthesis.
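As a simple illustration of how these cut-offs combine, the sketch below flags the atherogenic dyslipidemia pattern from a routine lipid panel using only the thresholds quoted above (TG ≥ 150 mg/dL and low HDL by sex); the function name and structure are arbitrary, and the check is a simplification, not a validated diagnostic algorithm.

    def atherogenic_dyslipidemia_suspected(tg_mg_dl: float, hdl_mg_dl: float, sex: str) -> bool:
        """Screen for the atherogenic dyslipidemia pattern described in the text:
        triglycerides >= 150 mg/dL together with low HDL (<40 mg/dL in men, <45 mg/dL in women).
        LDL is deliberately ignored here because it may be normal in this phenotype."""
        low_hdl = hdl_mg_dl < (40.0 if sex == "male" else 45.0)
        return tg_mg_dl >= 150.0 and low_hdl

    # Example: TG 210 mg/dL and HDL 38 mg/dL in a man -> True
    print(atherogenic_dyslipidemia_suspected(210, 38, "male"))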
Atherogenic dyslipidemia is associated with a residual risk of developing coronary heart disease in patients with serum LDL levels of 70 mg/dL or less, to a similar or greater extent than in the overall group. The diagnosis of atherogenic dyslipidemia is based on the assessment of the lipid profile. Measurement of the lipid profile should be performed in all people over 40 years and in all younger persons with cardiovascular risk factors, including obesity and MAFLD. The main method of treatment of atherogenic dyslipidemia is the management of obesity. In addition, statins in combination with fibrates or omega-3 fatty acids should be used.

Arterial Hypertension

Obesity is a major risk factor for developing arterial hypertension. The links in the pathogenesis of obesity-induced hypertension are complex, but each of them is based on excess visceral adipose tissue. These include inflammation and endocrine dysfunction of adipose tissue, insulin resistance, endothelial dysfunction, increased sympathetic nervous system activity, activation of the renin–angiotensin–aldosterone system, dysfunction of the natriuretic peptide system, and the rare development of obesity-related glomerulopathy. The effect of these changes is increased cardiac output, peripheral vasoconstriction, and impaired pressure natriuresis (water and sodium retention and increased blood volume). The diagnosis of arterial hypertension should not be based on a single blood pressure measurement taken during a single visit. Exceptions are rare situations in which blood pressure is significantly elevated (grade 3 arterial hypertension) or there is clear evidence of complications of arterial hypertension (e.g., left ventricular hypertrophy, hypertensive retinopathy with effusions and petechiae, or kidney damage). In people with mean blood pressure values below 180/110 mmHg, arterial hypertension should be diagnosed on the basis of at least two blood pressure measurements taken during at least two separate visits. It should be noted that the basis for the diagnosis and treatment of arterial hypertension is still measurements made in the doctor's office. However, arterial hypertension can also be diagnosed based on out-of-office measurements, i.e., ambulatory blood pressure monitoring (ABPM) and home measurements. In most patients, blood pressure should be measured using a standard arm cuff (width 12–13 cm, length 35 cm); if the patient's arm circumference is >32 cm, a larger cuff should be used. At least 30 min before the measurement, the patient should refrain from consuming coffee, smoking cigarettes, and taking other stimulants. The measurement should be performed after at least five minutes of rest, in a sitting position with the back supported, in a quiet room with maintained thermal comfort. As a standard, at least three measurements should be taken during the same visit at 1–2 min intervals, and the blood pressure value is determined as the average of the last two measurements. If blood pressure varies between measurements (>10 mmHg), additional measurements should be taken. At the initial assessment, all patients should undergo an orthostatic test, with blood pressure measurements taken 1 and 3 min after the change from a sitting to a standing position. The therapeutic goals in patients with obesity are blood pressure values of 120–129/70–79 mmHg in patients under 65 years, 130–139/70–79 mmHg in patients aged 65–80 years, and 130–150/70–79 mmHg in those over 80 years.
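To make the measurement protocol and the age-dependent targets above concrete, here is a minimal sketch; the function names are arbitrary, and the target bands are copied directly from the preceding paragraph, so this is only an illustration of the logic, not a treatment tool.

    def office_bp(readings_mmHg: list[tuple[int, int]]) -> tuple[float, float]:
        """Average the last two of at least three same-visit readings (systolic, diastolic)."""
        if len(readings_mmHg) < 3:
            raise ValueError("at least three measurements are required")
        last_two = readings_mmHg[-2:]
        systolic = sum(r[0] for r in last_two) / 2
        diastolic = sum(r[1] for r in last_two) / 2
        return systolic, diastolic

    def bp_target_range(age_years: int) -> tuple[tuple[int, int], tuple[int, int]]:
        """Therapeutic goals quoted in the text: (systolic range, diastolic range) in mmHg."""
        if age_years < 65:
            return (120, 129), (70, 79)
        if age_years <= 80:
            return (130, 139), (70, 79)
        return (130, 150), (70, 79)

    # Example: readings 142/88, 138/86, 136/84 in a 58-year-old
    print(office_bp([(142, 88), (138, 86), (136, 84)]))  # -> (137.0, 85.0)
    print(bp_target_range(58))                            # -> ((120, 129), (70, 79))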
An important part of arterial hypertension management is weight reduction. Combined pharmacotherapy is recommended in obese patients. Combinations of an angiotensin-converting enzyme inhibitor (ACE-I) or angiotensin receptor blocker (ARB) with a diuretic or calcium channel blocker (CCB) should be used as first-line drugs. In the second step, if the therapeutic goal is not achieved, triple therapy is preferred (a combination of an ACE-I or sartan with a CCB and a diuretic, with a separate scheme when there are indications for treatment with β-blockers). In the third step, a fourth drug should be added.

Obesity-Related Glomerulopathy (ORG)

In patients with obesity, there is an increase in renal blood flow and glomerular filtration, resulting in dilation of the afferent glomerular arterioles. The links in ORG pathogenesis are glomerular hyperfiltration, insulin resistance and hyperinsulinemia, hyperleptinemia, a reduced anti-inflammatory effect of adiponectin, and chronic inflammation. Hyperleptinemia results in increased secretion of transforming growth factor β (TGF-β), which stimulates the proliferation of endothelial and mesangial cells and the overproduction of extracellular matrix. Hyperinsulinemia not only stimulates myocyte proliferation in the media of the arteries but also promotes glomerulosclerosis by stimulating collagen synthesis. An additional factor involved in the pathogenesis of ORG is dyslipidemia. Characteristic symptoms of ORG are proteinuria and gradual impairment of renal excretion. ORG is a progressive nephropathy, and the rate of its progression depends on the occurrence of complications of obesity, such as arterial hypertension and type 2 diabetes. ORG is a rarely diagnosed entity, identified on kidney biopsy in patients with high-range proteinuria.

The Main Hormonal Disturbances

Growth hormone (GH) deficiency
A decrease in GH and IGF-1 levels may be considered a complication of obesity in patients without a pituitary disease. The routine determination of GH and IGF-1 in patients with obesity is not recommended.

Hypogonadism in Men
It occurs in 32.7% (up to 45%) of men with obesity. In all men with obesity, an assessment of the symptoms of hypogonadism (decreased libido, erectile dysfunction, infertility, muscle weakness, gynecomastia, gynoid type of fat distribution, and androgenic hair loss) should be conducted. Hormonal work-up in men without these symptoms is not recommended.
Diagnostic tests:
- Serum concentrations of total and free testosterone, sex hormone-binding globulin (SHBG), FSH, LH, and PRL.
The reference ranges for serum testosterone levels in men with obesity are age-specific. Hypogonadism is diagnosed in men with serum testosterone levels ≤ 11 nmol/L (3.2 ng/mL) in the presence of symptoms.

Functional hyperandrogenism in women and polycystic ovary syndrome (PCOS)
It occurs in 9.1–29% of women with obesity. Diagnostic work-up is recommended only in women with menstrual disturbances, chronic anovulation, infertility, and/or symptoms of androgenization (hirsutism, androgenetic alopecia, or acne).
Diagnostic tests:
- Serum concentrations of FSH, LH, PRL, estradiol, total testosterone, and SHBG (between days 3–5 of the menstrual cycle);
- Concentrations of androstenedione, 17-hydroxyprogesterone, and progesterone (depending on individual indications).
Moreover, an ultrasound examination of the ovaries and determination of plasma glucose concentration are recommended.
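Because the hypogonadism threshold above is given in both nmol/L and ng/mL, a small conversion helper can avoid unit mix-ups; the conversion factor (1 ng/mL ≈ 3.47 nmol/L, derived from the molar mass of testosterone) is a standard value not stated in this article, and the function names are arbitrary.

    NMOL_PER_NG_ML = 3.47  # approximate conversion factor for testosterone (assumed, standard value)

    def testosterone_ng_ml(total_testosterone_nmol_l: float) -> float:
        """Convert total testosterone from nmol/L to ng/mL."""
        return total_testosterone_nmol_l / NMOL_PER_NG_ML

    def hypogonadism_suspected(total_testosterone_nmol_l: float, has_symptoms: bool) -> bool:
        """Threshold quoted in the text: <= 11 nmol/L (about 3.2 ng/mL) together with symptoms."""
        return has_symptoms and total_testosterone_nmol_l <= 11.0

    print(round(testosterone_ng_ml(11.0), 2))  # -> 3.17, roughly the 3.2 ng/mL quoted above
    print(hypogonadism_suspected(9.5, True))   # -> True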
3.3.2. Diseases Caused by Mechanical Consequences of Excessive Accumulation of Visceral Fat

Gastroesophageal Reflux Disease (GERD)

Patients with obesity report numerous symptoms related to the function of the esophagus and stomach, including difficulties in swallowing, pain while eating or after eating, a feeling of fullness and retention in the stomach, heartburn, or regurgitation, which paradoxically do not translate into weight loss. The factors contributing to these disorders in patients with obesity include increased intra-abdominal pressure and upward displacement of the diaphragm as a result of the accumulation of visceral fat. The disorders are also favored by anatomical and functional abnormalities of the esophagus and stomach, causing abnormal esophageal motility and thus impaired esophageal clearance (the ability to clear the esophagus of food residues or regurgitated contents), lowered pressure of the lower esophageal sphincter, transient lower esophageal sphincter relaxation, or hiatal hernia. All patients with overweight or obesity should be evaluated for symptoms of GERD, and in patients with these symptoms, and when treatment fails to control the symptoms, an endoscopy should be performed. The treatment of GERD in patients with overweight or obesity includes at least 10% weight loss and the use of a proton pump inhibitor.

Obesity Hypoventilation Syndrome (OHS)

OHS is defined as the occurrence of symptoms of hypoventilation in patients with obesity when all other potential causes of hypoventilation have been excluded. In OHS, the hypoxemia observed during physiological sleep deepens, and hypercapnia increases to pathological values that meet the definition of respiratory failure, i.e., a state in which the partial pressure of oxygen in arterial blood falls below 60 mmHg (PaO2 < 60 mmHg) or PCO2 increases to ≥ 45 mmHg. The primary pathomechanism of hypercapnia is hypoventilation. However, it should be emphasized that hypercapnia in pure, untreated OHS is always accompanied by hypoxemia (type 2 respiratory failure). Respiratory acidosis is compensated by renal production of bicarbonates (HCO3−). The specificity of respiratory failure in the course of OHS is that it appears insidiously at night, first especially during REM sleep, then persists during NREM sleep, and finally becomes consolidated during the day. In more advanced cases, respiratory disturbances occur around the clock and are no longer compensated by daytime hyperventilation. Clinical symptoms of OHS include impaired concentration, excessive daytime sleepiness, decreased exercise tolerance, and morning headaches. All patients with obesity, especially grade II and III, should be evaluated for OHS. Obesity management is an essential element of therapy.

Obstructive Sleep Apnea Syndrome (OSA)

OSA is the result of sleep-related decreases in airflow and oxygenation. An increase in body weight significantly increases the risk of developing OSA. Neck circumferences above 40.6 cm in women and 43.2 cm in men are associated with an increased risk of OSA. Symptoms include loud snoring, interruptions in breathing (apneic or hypopnea pauses), and sleep-cycle fragmentation that, in turn, produce daytime fatigue, morning headache, lack of concentration, erectile dysfunction, and a general decrease in quality of life. In patients with these symptoms, polysomnography should be considered. Management of obesity is an essential part of therapy.
3.3.3. Mechanical Damage Caused by Excessive Load

Osteoarthritis

Numerous studies indicate an indisputable relationship between obesity and the development of knee osteoarthritis. A correlation was found between the diagnosis of obesity and the development of various deformities of the knee, which is believed to be a mechanical factor in the development of obesity-dependent gonarthrosis. Obese people adapt to their body weight by walking more slowly with their feet wider apart. They experience greater loads on the joints of the lower limbs, which predispose them to damage. Obesity is associated with structural disorders as well as impaired gait function, flattening of the arches of the foot, and excessive pronation in the ankle joint. When walking, there is an increase in the mobility of the rear foot, and this causes forefoot abduction to a greater extent than in a normal-weight person. Being overweight leads to increased pressure on the loaded joints. Postural instability leading to falls has been found in people diagnosed with grade III obesity. Obesity is a significant risk factor for pain in the neck, shoulder, elbow, wrist, and hand. Obesity in professionally active individuals predisposes to the development of tendinitis in the upper limbs. Numerous studies indicate a risk of developing ulnar nerve groove syndrome or carpal tunnel syndrome in obese patients, especially those who perform repetitive activities during their professional work. Obesity significantly increases the risk of rotator cuff tendinitis. Obesity is also a risk factor for greater trochanteric bursitis, a common cause of lateral hip pain in middle-aged and older adults. Spinal pain syndromes caused by degenerative disc disease, stenosis of the spinal canal, and diseases of the intervertebral joints are very common problems in society, causing significant morbidity. This has significant consequences for work efficiency and utilization of health services. The relationship between obesity and these diseases is ambiguous. Some studies evaluating this issue find no evidence of a link between obesity and low back pain. However, compared with people with normal weight, obese patients more often suffer from radicular pain and present neurological symptoms. Screening for symptoms and physical examination for osteoarthritis should be performed in all patients with overweight and obesity. Obesity management is an essential part of osteoarthritis treatment.

Chronic Venous Disease

Epidemiological studies have shown that obesity is a risk factor for varicose veins in both sexes. It has been suggested that the main mechanism impairing venous function, particularly venous return, and possibly increasing the rate of reflux in patients with obesity, is the high pressure in the abdomen.

3.3.4. Other

Cholelithiasis

The risk of cholesterol gallstone formation and symptomatic cholelithiasis increases significantly in patients with obesity and is augmented by weight loss, especially if it is rapid. Approximately one-third of stones are symptomatic. The incidence of new gallstone formation is 10–12% after 8–16 weeks of a low-calorie diet and above 30% in the first 18 months after gastric bypass surgery. A higher risk of gallstone formation has also been observed in clinical trials that assessed the efficacy and safety of GLP-1 analogs.
The additional risk factors for gallstone formation during weight loss include loss of more than 25% of the initial body weight, a rate of weight loss above 1.5 kg per week, a very low-calorie diet containing little or no fat, and periods of absolute fasting. Cholelithiasis may be prevented by treatment with ursodeoxycholic acid 500–600 mg per day during the first 6 months of weight loss.

Stress Urinary Incontinence

Obesity is a major risk factor for urinary incontinence in women, and its frequency and severity increase with increasing BMI values and duration of obesity. Screening for urinary incontinence should be performed in all women with overweight or obesity.

Asthma

Asthma symptoms and severity are associated with increased proinflammatory cytokines and adipokines related to obesity. Numerous studies have shown improvement in forced vital capacity after an average 7.5% weight reduction in patients with obesity and asthma. Medical history, symptomatology, and spirometry should be considered in all patients with overweight or obesity with an increased risk of asthma and reactive airway disease.

Depression and Anxiety

Depression and bipolar disorder (BD)
It has been shown that 30–50% of people seeking treatment for obesity have a history of depression or anxiety. The occurrence of depressive symptoms in young women is an important risk factor for the development of obesity later in life. Higher BMI values than in the general population have already been observed in adolescents with depression. The association between depression and obesity seems bi-directional. The classic symptom of depression is loss of appetite and weight; however, when mood improves during treatment, appetite and weight increase. Of note, atypical depression is observed more frequently in patients with obesity. People with this type of depression deal with negative emotions with food. Food stimulates the release of dopamine in the reward system and temporarily improves mood. Through this mechanism, depression may be the cause of the development of eating disorders. On the other hand, obesity may be a cause of the development of depression due to low self-esteem, discrimination, stigmatization, and social exclusion. Therapy in patients with obesity, especially for bipolar disorder, is often less effective than in patients with normal body weight. Some studies have shown that bipolar disorder in patients with obesity is associated with a greater degree of disability, including impairment of memory, concentration, and attention, as well as a greater relapse rate and a more severe course of the disease. The consequences of the coexistence of depression and obesity include worse patient–doctor cooperation, avoidance and social withdrawal, decreased quality of life, greater severity of depression, greater risk of disability and job loss, and suicidal thoughts and attempts. In patients with obesity, anxiety disorders such as panic attacks and agoraphobia (fear and avoidance of being out in the open and in public places) are twice as common as in normal-weight people. All patients with obesity should be screened for symptoms of depression and anxiety in the GP's practice using the Hospital Anxiety and Depression Scale. Body weight and metabolic parameters should be monitored in all patients treated for psychiatric diseases. The family doctor should stay in touch with the psychiatrist and undertake joint actions aimed at the effective treatment of mental illnesses and limiting their consequences for physical health.
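Since the HADS is recommended here as the screening tool for both depression and anxiety, a minimal scoring sketch may be useful. The cutoffs used below (0–7 normal, 8–10 borderline, 11–21 clinically significant per subscale) are the commonly cited ones for the HADS and are not specified in this article, so they should be treated as an assumption; the function name is arbitrary.

    def hads_subscale_category(score: int) -> str:
        """Interpret one HADS subscale (anxiety or depression), each scored 0-21.
        Commonly cited cutoffs, assumed here: 0-7 normal, 8-10 borderline, 11-21 abnormal."""
        if not 0 <= score <= 21:
            raise ValueError("each HADS subscale ranges from 0 to 21")
        if score <= 7:
            return "normal"
        if score <= 10:
            return "borderline"
        return "abnormal (clinically significant)"

    # Example: anxiety subscale 9, depression subscale 12
    print(hads_subscale_category(9))    # -> borderline
    print(hads_subscale_category(12))   # -> abnormal (clinically significant)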
These complications develop as a result of excessive accumulation of visceral adipose tissue with local inflammation, disturbances of adipokine secretion, and insulin resistance. The adipose tissue becomes inefficient at storing energy, which leads to ectopic accumulation of fat in the liver and skeletal muscle and to the development of insulin resistance. Systemic inflammation, changes in adipokine secretion, insulin resistance, and hyperinsulinemia are the key links in the development of obesity complications in adults, such as: nonalcoholic fatty liver disease (NAFLD), currently called metabolic-associated fatty liver disease (MAFLD); prediabetes (impaired fasting glucose (IFG) and impaired glucose tolerance (IGT)) and type 2 diabetes; atherogenic dyslipidemia (decreased HDL-C and elevated TG, with frequently only slight changes in TC and LDL-C concentrations); cardiovascular diseases (hypertension, coronary artery disease, carotid atherosclerosis, and stroke); obesity-induced glomerulopathy; cancers (e.g., colon, breast, and endometrium); and hormonal disturbances that lead to infertility in women (functional hyperandrogenism and polycystic ovary syndrome (PCOS)) and men (hypogonadism).

Non-Alcoholic Fatty Liver Disease (NAFLD)/Metabolic-Associated Fatty Liver Disease (MAFLD)

The diagnostic criteria of NAFLD are hepatic steatosis > 5% and exclusion of secondary causes of liver disease, including 'significant' alcohol use. The diagnostic criteria for MAFLD, formulated in 2020 by an expert group using a two-stage Delphi consensus, are hepatic steatosis > 5% together with metabolic risk drivers, such as type 2 diabetes or overweight/obesity according to ethnic-specific BMI classifications. In people with normal weight, the diagnosis of MAFLD requires hepatic steatosis > 5% and two of seven risk factors: waist circumference > 102 cm in Caucasian men and >88 cm in Caucasian women; blood pressure > 130/85 mmHg or hypotensive therapy; plasma triglycerides > 150 mg/dL or specific drug treatment; plasma HDL cholesterol < 40 mg/dL in men and <50 mg/dL in women or specific drug treatment; prediabetes (fasting glucose 100–125 mg/dL, 2 h post-load glucose 140–199 mg/dL, or HbA1c 5.7–6.4%); homeostasis model assessment of insulin resistance (HOMA-IR) score > 2.5; and plasma high-sensitivity C-reactive protein (hs-CRP) > 2 mg/L. In all patients with overweight or obesity, an ultrasound of the liver should be performed. In patients with normal weight, the risk factors should be assessed. MAFLD is a progressive process, from steatosis through inflammation and fibrosis to cirrhosis or hepatocellular carcinoma. However, the leading causes of premature death among people with NAFLD are cardiovascular complications. A noninvasive method of fibrosis assessment is the FIB-4 test (age, ALT and AST activity, and platelet count). In patients with MAFLD, the main method of treatment is the effective management of obesity. However, weight reduction should be no more than 0.5 kg per week. Excessively fast weight loss induces the formation of lithogenic bile and may increase liver steatosis. There is no safe amount of alcohol in MAFLD. If indicated, pharmacotherapy appropriate to the severity of the carbohydrate and lipid metabolism disturbances caused by MAFLD should be used. The use of ursodeoxycholic acid (UDCA) at a dose of 10–15 mg/kg/day should be considered.
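For illustration only, the two-of-seven risk-driver rule for normal-weight patients described above can be expressed as a simple count. The sketch below is a minimal, hypothetical Python illustration of that arithmetic, using the thresholds listed in this section; the function and parameter names are invented for this example and do not come from any clinical calculator.

```python
# Minimal illustrative sketch (not a clinical tool) of the MAFLD rule for
# normal-weight patients described above: hepatic steatosis > 5% plus at
# least two of the seven metabolic risk drivers. Thresholds follow the text.

def mafld_in_normal_weight(steatosis_percent, male, waist_cm,
                           bp_high_or_treated, tg_mg_dl, tg_treated,
                           hdl_mg_dl, hdl_treated, prediabetes,
                           homa_ir, hs_crp_mg_l):
    risk_drivers = [
        waist_cm > (102 if male else 88),                  # waist circumference (Caucasian cut-offs)
        bp_high_or_treated,                                # BP > 130/85 mmHg or hypotensive therapy
        tg_mg_dl > 150 or tg_treated,                      # triglycerides > 150 mg/dL or treatment
        hdl_mg_dl < (40 if male else 50) or hdl_treated,   # low HDL cholesterol or treatment
        prediabetes,                                       # IFG, IGT, or HbA1c 5.7-6.4%
        homa_ir > 2.5,                                     # HOMA-IR score > 2.5
        hs_crp_mg_l > 2,                                   # high-sensitivity CRP > 2 mg/L
    ]
    return steatosis_percent > 5 and sum(risk_drivers) >= 2
```

In patients with overweight or obesity or with type 2 diabetes, this count is not needed: hepatic steatosis > 5% together with that risk driver is sufficient, as stated above.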
Prediabetes and Type 2 Diabetes

Prediabetes includes impaired fasting glucose (IFG), related to hepatic fat accumulation and hepatic insulin resistance, and impaired glucose tolerance (IGT), related to muscle fat accumulation and muscle insulin resistance. Disturbances associated with IFG result in an increase in hepatic glucose production and fasting hyperglycemia, but during activity the plasma glucose concentration gradually decreases as it is used as energy by muscles that remain insulin sensitive. In isolated IGT, on the other hand, muscle insulin resistance and a defect in the second phase of insulin secretion result in long-term hyperglycemia. In addition, the post-prandial release of glucagon-like peptide-1 is impaired, resulting in decreased insulin secretion. It is estimated that about 70% of people with prediabetes will develop type 2 diabetes in the future if obesity is not effectively treated. Prediabetes is an early stage in the development of type 2 diabetes. The progression of carbohydrate metabolism disorders towards diabetes is associated with impaired compensatory insulin secretion due to increased apoptosis of pancreatic islet β cells, which is facilitated by impaired GLP-1 secretion, increased release of pro-inflammatory cytokines and leptin by visceral adipose tissue, increased glucagon secretion and hepatic glucose synthesis, and progressive changes in skeletal muscle metabolism. All patients with overweight, obesity, or visceral obesity should be screened for IFG by testing fasting blood glucose at least 12 h after the last meal. Patients with fasting glucose levels of 100–125 mg/dL should have a 75 g oral glucose tolerance test. Diabetes should be diagnosed based on the criteria of the Polish Diabetology Society. The main method of treatment in patients with prediabetes and type 2 diabetes is the effective management of obesity. Metformin is recommended in patients with prediabetes and as first-line therapy in most patients with type 2 diabetes. Other classes of hypoglycemic agents are useful in combination with metformin or when metformin is contraindicated or not tolerated. Their selection should be based on the balance between efficacy and side effect profile. All patients with type 2 diabetes and established or subclinical cardiovascular disease should be treated with a GLP-1 receptor agonist (GLP-1 RA) or an SGLT2 inhibitor (SGLT2i).

Atherogenic Dyslipidemia

Atherogenic dyslipidemia is characterized by elevated serum triglyceride levels of at least 150 mg/dL (~1.7 mmol/L), elevated serum levels of triglyceride-rich very low-density lipoproteins (VLDL), and decreased HDL cholesterol, below 40 mg/dL (~1 mmol/L) in men and below 45 mg/dL (~1.2 mmol/L) in women. Serum LDL may be normal or elevated, with an increased percentage of oxidized particles (oxLDL). These abnormalities result from fatty liver and increased triglyceride and VLDL production, as well as decreased HDL cholesterol synthesis. Atherogenic dyslipidemia is associated with a residual risk of developing coronary heart disease in patients with serum LDL levels of 70 mg/dL or less, to a similar or greater extent than in the overall group. The diagnosis of atherogenic dyslipidemia is based on the assessment of the lipid profile. Measurement of the lipid profile should be performed in all people over 40 years of age and in all younger persons with cardiovascular risk factors, including obesity and MAFLD. The main method of treatment of atherogenic dyslipidemia is the management of obesity.
In addition, statins in combination with fibrates or omega-3 fatty acids should be used.

Arterial Hypertension

Obesity is a major risk factor for developing arterial hypertension. The links in the pathogenesis of obesity-induced hypertension are complex, but each of them is based on excess visceral adipose tissue. These include inflammation and endocrine dysfunction of adipose tissue, insulin resistance, endothelial dysfunction, increased sympathetic nervous system activity, activation of the renin–angiotensin–aldosterone system, dysfunction of the natriuretic peptide system, and, rarely, the development of obesity-related glomerulopathy. The effect of these changes is increased cardiac output, peripheral vasoconstriction, and impaired pressure natriuresis (water and sodium retention and increased blood volume). The diagnosis of arterial hypertension should not be based on a single blood pressure measurement taken during a single visit. Exceptions are rare situations in which blood pressure is significantly elevated (grade 3 arterial hypertension) or there is clear evidence of complications of arterial hypertension (e.g., left ventricular hypertrophy, hypertensive retinopathy with exudates and hemorrhages, or kidney damage). In people with mean blood pressure values below 180/110 mmHg, arterial hypertension should be diagnosed on the basis of at least two blood pressure measurements taken during at least two separate visits. It should be noted that the basis for the diagnosis and treatment of arterial hypertension is still measurements made in the doctor's office. However, arterial hypertension can also be diagnosed based on out-of-office measurements, i.e., ambulatory blood pressure monitoring (ABPM) and home measurements. In most patients, blood pressure should be measured using a standard arm cuff (width 12–13 cm, length 35 cm); if the patient's arm circumference is >32 cm, a larger cuff should be used. For at least 30 min before the measurement, the patient should refrain from drinking coffee, smoking cigarettes, and taking other stimulants. The measurement should be performed after at least five minutes of rest, in a sitting position with the back supported, in a quiet room with thermal comfort maintained. As a standard, at least three measurements should be taken during the same visit at 1–2 min intervals, and the blood pressure value is determined as the average of the last two measurements. If blood pressure varies between measurements (>10 mmHg), additional measurements should be taken. At the initial assessment, all patients should undergo an orthostatic test, with blood pressure measured 1 and 3 min after changing from a sitting to a standing position. The therapeutic goals in patients with obesity are blood pressure values of 120–129/70–79 mmHg under 65 years of age, 130–139/70–79 mmHg at 65–80 years, and 130–150/70–79 mmHg over 80 years. An important part of arterial hypertension management is weight reduction. Combined pharmacotherapy is recommended in obese patients. Combinations of an angiotensin-converting enzyme inhibitor (ACE-I) or angiotensin receptor blocker (ARB) with a diuretic or calcium channel blocker (CCB) should be used as first-line treatment. In the second step, if the therapeutic goal is not achieved, triple therapy is preferred (a combination of an ACE-I or sartan with a CCB and a diuretic, with separate therapy when there are indications for treatment with β-blockers). In the third step, a fourth drug should be added.
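As a purely illustrative aid, the office measurement routine described above (at least three readings at 1–2 min intervals, additional readings when values differ by more than 10 mmHg, and averaging of the last two readings) can be sketched as follows. This is only an illustration of the arithmetic, not a validated device algorithm; `take_reading` is a hypothetical placeholder for a single cuff measurement.

```python
# Illustrative sketch of the office blood pressure protocol described above.
# `take_reading` is a hypothetical placeholder returning (systolic, diastolic)
# in mmHg; the >10 mmHg check is applied here to the systolic value for simplicity.

def office_blood_pressure(take_reading, max_readings=6):
    readings = [take_reading() for _ in range(3)]        # at least three readings, 1-2 min apart
    while (abs(readings[-1][0] - readings[-2][0]) > 10   # readings differ by >10 mmHg
           and len(readings) < max_readings):
        readings.append(take_reading())                  # take an additional reading
    last_two = readings[-2:]
    systolic = sum(r[0] for r in last_two) / 2           # average of the last two readings
    diastolic = sum(r[1] for r in last_two) / 2
    return systolic, diastolic
```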
Obesity-Related Glomerulopathy (ORG)

In patients with obesity, there is an increase in renal blood flow and glomerular filtration, resulting in dilation of the afferent glomerular arterioles. The links in ORG pathogenesis are glomerular hyperfiltration, insulin resistance and hyperinsulinemia, hyperleptinemia, a reduced anti-inflammatory effect of adiponectin, and chronic inflammation. Hyperleptinemia results in increased secretion of transforming growth factor β (TGF-β), which stimulates the proliferation of endothelial and mesangial cells and the overproduction of extracellular matrix. Hyperinsulinemia not only stimulates myocyte proliferation in the media of the arteries but also promotes glomerulosclerosis by stimulating collagen synthesis. An additional factor involved in the pathogenesis of ORG is dyslipidemia. Characteristic symptoms of ORG are proteinuria and gradual impairment of renal excretory function. ORG is a progressive nephropathy, and the rate of its progression depends on the occurrence of complications of obesity, such as arterial hypertension and type 2 diabetes. ORG is a rarely diagnosed entity; the diagnosis is based on kidney biopsy in patients with high-range proteinuria.

The Main Hormonal Disturbances

Growth hormone (GH) deficiency
A decrease in GH and IGF-1 levels may be considered a complication of obesity in patients without pituitary disease. Routine determination of GH and IGF-1 in patients with obesity is not recommended.

Hypogonadism in Men
It occurs in 32.7% (up to 45%) of men with obesity. In all men with obesity, an assessment of symptoms of hypogonadism (decreased libido, erectile dysfunction, infertility, muscle weakness, gynecomastia, gynoid type of fat distribution, and androgenic hair loss) should be conducted. Hormonal work-up in men without these symptoms is not recommended.
Diagnostic tests:
- Serum concentrations of total and free testosterone, sex hormone-binding globulin (SHBG), FSH, LH, and PRL.
The reference ranges for serum testosterone levels in men with obesity are age-specific. Hypogonadism is diagnosed in men with serum testosterone levels ≤ 11 nmol/L (3.2 ng/mL) in the presence of symptoms.

Functional hyperandrogenism in women and polycystic ovary syndrome (PCOS)
It occurs in 9.1–29% of women with obesity. Diagnostic work-up is recommended only in women with menstrual disturbances, chronic anovulation, infertility, and/or symptoms of androgenization (hirsutism, androgenetic alopecia, or acne).
Diagnostic tests:
- Serum concentrations of FSH, LH, PRL, estradiol, total testosterone, and SHBG (between days 3 and 5 of the menstrual cycle);
- Concentrations of androstenedione, 17-hydroxyprogesterone, and progesterone (depending on individual indications).
Moreover, an ultrasound examination of the ovaries and determination of plasma glucose concentration is recommended.
Gastroesophageal Reflux Disease (GERD)

Patients with obesity report numerous symptoms related to the function of the esophagus and stomach, including difficulties in swallowing, pain while eating or after eating, a feeling of fullness and retention in the stomach, heartburn, or regurgitation, which paradoxically does not translate into weight loss. The factors contributing to the occurrence of these disorders in patients with obesity include increased intra-abdominal pressure and elevation of the diaphragm as a result of the accumulation of visceral fat. These disorders are also favored by anatomical and functional abnormalities of the esophagus and stomach, which cause abnormal esophageal motility and thus impaired esophageal clearance (i.e., the ability to clear the esophagus of food residues or regurgitated contents), lowering of the lower esophageal sphincter pressure, transient lower esophageal sphincter relaxation, or hiatal hernia. All patients with overweight or obesity should be evaluated for symptoms of GERD, and in patients with these symptoms, an endoscopy should be performed if treatment fails to control the symptoms. The treatment of GERD in patients with overweight or obesity includes at least 10% weight loss and the use of a proton pump inhibitor.

Obesity Hypoventilation Syndrome (OHS)

OHS is defined as the occurrence of symptoms of hypoventilation in patients with obesity when all other potential causes of hypoventilation have been excluded. In OHS, the hypoxemia observed during physiological sleep deepens, and hypercapnia increases to pathological values that meet the definition of respiratory failure, i.e., a state in which the partial pressure of oxygen in arterial blood falls below 60 mmHg (PaO2 < 60 mmHg) or the PaCO2 rises to ≥45 mmHg. The primary pathomechanism of hypercapnia is hypoventilation. However, it should be emphasized that hypercapnia in pure, untreated OHS is always accompanied by hypoxemia (type 2 respiratory failure). Respiratory acidosis is compensated by renal production of bicarbonate (HCO3−). A specific feature of respiratory failure in the course of OHS is that it appears insidiously at night, especially during REM sleep, then appears and persists during NREM sleep, and finally becomes consolidated during the day as well. In more advanced cases, respiratory disturbances occur around the clock and are no longer compensated by daytime hyperventilation.
Clinical symptoms of OHS include impaired concentration, excessive daytime sleepiness, decreased exercise tolerance, and morning headaches. All patients with obesity, especially grade II and III, should be evaluated for OHS. Obesity management is an essential element of therapy.

Obstructive Sleep Apnea Syndrome (OSA)

OSA is the result of sleep-related decreases in airflow and oxygenation. An increase in body weight significantly increases the risk of developing OSA. Neck circumferences above 40.6 cm in women and 43.2 cm in men are associated with an increased risk of OSA. Symptoms include loud snoring, interruptions in breathing (apneic or hypopneic pauses), and sleep-cycle fragmentation that, in turn, produce daytime fatigue, morning headache, lack of concentration, erectile dysfunction, and a general decrease in quality of life. In patients with these symptoms, polysomnography should be considered. Management of obesity is an essential part of therapy.
Mechanical Damage Caused by Excessive Load

Osteoarthritis

Numerous studies indicate an indisputable relationship between obesity and the development of knee osteoarthritis. A correlation has been found between the diagnosis of obesity and the development of various deformities of the knee, which is believed to be a mechanical factor in the development of obesity-dependent gonarthrosis. Obese people adapt to their body weight by walking more slowly with their feet wider apart. They experience greater loads on the joints of the lower limbs, which predispose them to damage. Obesity is associated with structural disorders as well as impaired gait function, flattening of the arches of the foot, and excessive pronation in the ankle joint. When walking, there is an increase in the mobility of the rear foot, and this causes forefoot abduction to a greater extent than in a normal-weight person. Being overweight leads to increased pressure on the loaded joints. Postural instability leading to falls has been found in people diagnosed with grade III obesity. Obesity is a significant risk factor for pain in the neck, shoulder, elbow, wrist, and hand. Obesity in professionally active individuals predisposes to the development of tendinitis in the upper limbs. Numerous studies indicate a risk of developing ulnar nerve groove syndrome or carpal tunnel syndrome in obese patients, especially those who perform repetitive activities in their professional work. Obesity significantly increases the risk of rotator cuff tendinitis. Obesity is also a risk factor for greater trochanteric bursitis, a common cause of lateral hip pain in middle-aged and older adults. Spinal pain syndromes caused by degenerative disc disease, stenosis of the spinal canal, and diseases of the intervertebral joints are very common problems in society, causing significant morbidity. This generates significant consequences for work efficiency and the utilization of health services. The relationship between obesity and these diseases is ambiguous. Some studies evaluating this issue find no evidence of a link between obesity and low back pain. However, compared with people with normal weight, obese patients more often suffer from radicular pain and present neurological symptoms. Screening for symptoms and physical examination for osteoarthritis should be performed in all patients with overweight and obesity. Obesity management is an essential part of osteoarthritis treatment.

Chronic Venous Disease

Epidemiological studies have shown that obesity is a risk factor for varicose veins in both sexes.
It has been suggested that the main mechanism impairing venous function, particularly venous return, and possibly increasing the rate of reflux in patients with obesity is the high pressure in the abdomen.

3.3.4. Other

Cholelithiasis

The risk of cholesterol gallstone formation and symptomatic cholelithiasis increases significantly in patients with obesity and is augmented by weight loss, especially if it is fast. Approximately one-third of stones are symptomatic. The incidence of new gallstone formation is 10–12% after 8–16 weeks of application of a low-calorie diet and above 30% in the first 18 months after gastric bypass surgery. A higher risk of gallstone formation has also been observed in clinical trials that assessed the efficacy and safety of GLP-1 analogs.
Additional risk factors for gallstone formation during weight loss include loss of more than 25% of the initial body weight, a rate of weight loss above 1.5 kg per week, a very low-calorie diet containing little or no fat, and periods of absolute fasting. Cholelithiasis may be prevented by treatment with ursodeoxycholic acid 500–600 mg per day during the first 6 months of weight loss.

Stress Urinary Incontinence

Obesity is a major risk factor for urinary incontinence in women, and its frequency and severity increase with increasing BMI values and duration of obesity. Screening for urinary incontinence should be performed in all women with overweight or obesity.

Asthma

Asthma symptoms and severity are associated with increased proinflammatory cytokines and adipokines related to obesity. Numerous studies have shown improvement in forced vital capacity after an average 7.5% weight reduction in patients with obesity and asthma. Medical history, symptomatology, and spirometry should be considered in all patients with overweight or obesity with an increased risk of asthma and reactive airway disease.

Depression and Anxiety

Depression and bipolar disorder (BD)
It has been shown that 30–50% of people seeking treatment for obesity have a history of depression or anxiety. The occurrence of depression symptoms in young women is an important risk factor for the development of obesity later in life. Higher BMI values than in the general population have already been observed in adolescents with depression. The association between depression and obesity appears bi-directional. The classic symptoms of depression are loss of appetite and weight; however, when mood improves during treatment, appetite and weight increase. Of note, atypical depression is observed more frequently in patients with obesity. People with this type of depression deal with negative emotions with food. Food stimulates the release of dopamine in the reward system and temporarily improves mood. Through this mechanism, depression may cause the development of eating disorders. On the other hand, obesity may cause the development of depression due to low self-esteem, discrimination, stigmatization, and social exclusion. Therapy in patients with obesity, especially for bipolar disorder, is often less effective than in patients with normal body weight. Some studies have shown that bipolar disorder in patients with obesity is associated with a greater degree of disability, including impairment of memory, concentration, and attention, as well as a greater relapse rate and a more severe course of the disease. The consequences of the coexistence of depression and obesity include worse patient–doctor cooperation, avoidance and social withdrawal, decreased quality of life, greater severity of depression, greater risk of disability and job loss, and suicidal thoughts and attempts. In patients with obesity, anxiety disorders such as panic attacks and agoraphobia (fear and avoidance of being out in the open and in public places) are twice as common as in normal-weight people. All patients with obesity should be screened for symptoms of depression and anxiety in the GP's practice using the Hospital Anxiety and Depression Scale. Body weight and metabolic parameters should be monitored in all patients treated for psychiatric diseases. The family doctor should stay in touch with the psychiatrist and undertake joint actions aimed at the effective treatment of mental illnesses and limiting their consequences for physical health.
Schizophrenia
Obesity is diagnosed three times more often in patients with schizophrenia before the start of pharmacotherapy than in the general population. Treatment with neuroleptics is associated with a further increase in the risk of developing obesity. Moreover, schizophrenia per se is a cause of low physical activity, spending more time in bed, consumption of poor-quality food, and frequent smoking. The prevalence of components of the metabolic syndrome among patients with schizophrenia is estimated at 37.0–63.0%, compared with 20.0–25.0% in the general population. Thus, when starting antipsychotic treatment, the impact of the drugs on food intake and, consequently, on the development of obesity should be considered alongside effectiveness and tolerability. In the course of schizophrenia treatment, the following should be monitored:
The presence of risk factors for or clinically overt cardiovascular disease (CVD) and/or diabetes mellitus, family history of CVD, smoking status, eating habits, and level of physical activity;
Body weight and height with calculated BMI, waist circumference, and blood pressure (mean value of at least two measurements during a single visit);
Fasting glucose, lipid profile, serum creatinine with estimated glomerular filtration rate (GFR), and uric acid level.

5.1. Therapeutic Goals

Establishing a therapeutic goal should follow the SMART rule, i.e., the goal should be specific, measurable, achievable, relevant, and timely. The overriding goal of obesity treatment is to slow down the progression of the disease, avoid relapses, and prevent the development of complications caused by excess body fat or reduce their severity, as well as to improve the patient's overall health and quality of life and to extend life. The overriding goal in patients without complications of obesity is to reduce the severity of the disease by one stage. In patients with complications, the goal is a reduction in body weight that will contribute to a significant improvement in the control of these complications and enable a reduction in the doses and/or number of drugs used; in some less advanced cases, it will allow remission of complications and discontinuation of pharmacotherapy. Achieving such goals requires individual determination of the percentage reduction of body weight in relation to the initial value. The goal should always be set in such a way that the patient does not feel that it is so distant as to be almost unattainable. Therefore, it is worth setting 3–6 month stages, in which the goal is to reduce body weight by 5–10% of the initial value, followed by a 3–6 month period of maintaining the obtained results and, if necessary, another stage of 5–10% body weight reduction.
It is believed that different percentages of initial body weight reduction are required to improve individual complications of obesity:
Approximately 10–40% body weight reduction in patients diagnosed with non-alcoholic steatohepatitis in the course of MAFLD.
At least 5% to 15% body weight reduction in patients diagnosed with the following:
- Type 2 diabetes (lowering HbA1c, reducing the number and/or doses of hypoglycemic drugs used, and remission of the disease, especially if its duration is short).
- Dyslipidemia (decrease in blood triglycerides and non-HDL cholesterol, and increase in HDL cholesterol).
- Arterial hypertension (reduction of systolic and diastolic pressure and reduction of the number and/or doses of antihypertensive drugs).
- Polycystic ovary syndrome (return of ovulatory cycles and regular menstruation, reduction of hirsutism, improvement of insulin resistance, and reduction of androgen levels in the blood).
At least 5% to 10% body weight reduction is recommended in patients diagnosed with the following:
- Male hypogonadism (increased testosterone levels in the blood).
- Stress urinary incontinence (reduced frequency of episodes of incontinence).
At least 7–8% body weight reduction is recommended in patients diagnosed with bronchial asthma (improvement in forced expiratory volume in 1 s and reduction in the severity of symptoms).
At least 7–10% body weight reduction is recommended in patients diagnosed with obstructive sleep apnea.
At least 10% body weight reduction is recommended in patients with the following:
- Prediabetes (preventing the development of type 2 diabetes and improving glucose levels).
- Female infertility (return of ovulatory menstrual cycles, pregnancy, and the birth of a live newborn).
- Osteoarthritis (reduction of pain and improvement of motor function).
- Gastroesophageal reflux (reduced symptoms).
At least 5% body weight reduction is recommended in patients with the following:
- The steatosis stage in the course of MAFLD (reducing lipid accumulation in the liver and improving metabolic function).
It is very important to set partial goals, both in terms of the effect and the changes leading to it, because the small-step method allows the patient to adapt better to changes and does not put pressure on them to achieve the effects. To avoid patient disappointment and discouragement, one should explain to them that the aim is their health and that a slow (approx. 1 kg/week in the first month and approx. 0.5 kg/week in the following months) but permanent weight loss is beneficial. The main reason for losing weight is improving health, not the number of kilograms lost. Slow but systematic weight loss as a result of a balanced diet and increased physical activity lowers blood pressure, serum glucose, and lipid levels, improves quality of life, and, in many people with accompanying diseases, allows a reduction in the number of drugs used. Too fast, significant weight loss causes a significant loss of lean mass and increases the risk of developing gallstones and fatty liver and of the 'yo-yo' effect.

5.2. Rule of the Five A's in the Treatment of Obesity in a GP's Practice

This tool is derived from smoking addiction counseling and was also proposed many years ago for the treatment of obesity. It has been observed that the use of all elements of the five A's rule significantly increases the achievement of therapeutic success.
The five A's rule includes:
1. ASK—asking questions should take the form of a motivational interview. During the interview, the patient should be made aware of the impact of their body weight on general health and quality of life. Avoid embarrassment, guilt, and stigmatization during the conversation. Always use adequate medical vocabulary and emphasize that obesity is a disease that can and should be treated. One should also avoid judging the patient during the interview. However, the assessment of the patient's readiness for change cannot be avoided. There are many standardized methods of assessing readiness for change, but in everyday clinical practice it is enough to ask the patient the following five questions: (1) Does the patient want to be treated for obesity to improve their health? (2) Does the patient want to change their eating habits permanently, without seeing it as a struggle? (3) Does the patient feel that their current way of eating is harmful to them? (4) Is the patient aware that the treatment will be long and is ready to cooperate with their doctor? (5) Will the patient try to accept the proposed treatments? If the patient is not ready to change, methods should be implemented to motivate them to make the change. In addition, the patient's sense of self-efficacy should be built by explaining to them that they are not expected to make a complete revolution in their life and that treatment will be based on small, gradual changes.
2. ASSESS—assessment of the causes of weight gain, health status, and the occurrence of complications caused by excess body fat. It is very important to correctly and fully determine the cause of weight gain, especially emotional eating and eating disorders (BED and NES). The patient's physical health can be assessed on the basis of a 100-point visual analog scale (VAS). Screening for depression (the Beck scale) and for anxiety and depression (the Hospital Anxiety and Depression Scale, HADS) should also be performed. An anamnesis regarding chronic diseases should also be taken, and in the absence of a prior diagnosis of obesity complications, their diagnosis should be undertaken.
3. ADVICE—presenting treatment options that can be used in a particular patient. In the selection of therapeutic methods, the primary cause of obesity should be considered, followed by the stage of the disease and the complications present. It is very important that, during the conversation about the recommendations, the patient has a sense of being understood. In addition, the patient should be made aware that the treatment process will be long and requires commitment from them, and that the doctor and other members of the therapeutic team are there to help them overcome difficulties. The patient should be presented with all therapeutic options that should be used in their case, and the benefits and possible risks associated with them should be discussed.
4. AGREE—obtaining the patient's consent to the proposed therapeutic goal and treatment plan. It is necessary to be aware that it is the patient who implements the doctor's recommendations; therefore, they cannot be arbitrary and must consider the patient's capabilities and the degree to which they are willing to comply with the recommendations. In other words, this stage is a compromise between what the patient should do, according to the doctor, and what the patient can and wants to achieve. At this stage, negotiations should be conducted with the patient based on respect for their autonomy and their right to choose.
However, the choice should be conscious, i.e., the consequences should be explained to the patient. Obtaining the patient's acceptance of the proposed therapeutic goal and treatment plan may require many discussions. This should not discourage the doctor from undertaking them. In addition, the physician must be willing to modify their recommendations based on the needs and capabilities of the patient. It is very important at this stage to work on making the patient's expectations regarding weight loss realistic. The patient should also be made aware that meeting the behavioral change goals is more important than weight loss itself because this will ultimately help them achieve the intended weight reduction. Success for each patient will have a different dimension, but it is important that the patient focuses on improving mental and physical health, not on the number of kilograms lost.
5. ASSIST—supporting the patient in the therapeutic process. After agreeing on the therapeutic goal, the doctor should help the patient identify barriers that may hinder treatment (social, medical, emotional, and economic) and factors that facilitate treatment (motivation and social support). The role of the doctor is to identify the causes of the disease, educate, recommend adequate therapeutic methods, and support the patient in their implementation. An important element of support is setting the schedule of follow-up visits, determining their frequency, and informing the patient what will be checked during each visit, which will make it easier for the patient to implement the recommendations. The schedule should specify the number of visits necessary to achieve the therapeutic goal, the minimum and maximum time intervals between visits (the exact date of the next visit should be determined at the previous visit), the parameters that will be checked during the visit, and what should be brought to the next visit (e.g., a record of physical activity and results of additional tests). At each follow-up visit, new problems that make it difficult for the patient to comply with the recommendations should be identified, and solutions or other therapeutic methods should be introduced to eliminate them.

5.3. Nutritional Interventions

The term 'diet' defines a way of eating; therefore, everything a person eats is a diet. However, in the common consciousness, a diet is associated with a special way of eating (elimination of many foods), which—used for several days or weeks—will lead to body weight loss, after which one can eat as before. That is why it is better to talk to the patient not about a 'diet' but about making a permanent change in eating habits. The energy content of the diet should be determined individually. The simplest approach is to apply a formula to determine the total energy expenditure:
Total energy expenditure = basal metabolic rate (BMR) × physical activity coefficient
BMR:
For men = 11.6 × body weight (kg) + 879 kcal;
For women = 8.7 × body weight (kg) + 826 kcal.
The physical activity coefficient:
For people who lead a sedentary lifestyle—1.3;
Moderately active—1.5;
Regularly physically active—1.7.
From the calculated energy expenditure, about 500–600 kcal should be subtracted to obtain a weight loss of approximately 0.5 kg per week, or about 1000 kcal for a loss of approximately 1 kg per week; the result determines the energy content of the diet. The energy content of the diet should be reassessed in accordance with the above data each time body weight stops decreasing.
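For illustration, the calculation described above (BMR from sex and body weight, multiplied by the physical activity coefficient, minus a deficit of about 500–600 kcal or about 1000 kcal) can be written out as a short sketch. The numbers come directly from this section; the function name and the worked example are illustrative only.

```python
# Illustrative sketch of the dietary energy prescription described above.
# BMR (kcal): men = 11.6 * weight + 879, women = 8.7 * weight + 826;
# total expenditure = BMR * activity coefficient; subtract ~500-600 kcal
# (≈0.5 kg/week loss) or ~1000 kcal (≈1 kg/week loss).

ACTIVITY_COEFFICIENT = {"sedentary": 1.3, "moderately_active": 1.5, "regularly_active": 1.7}

def diet_energy_kcal(weight_kg, male, activity="sedentary", deficit_kcal=600):
    bmr = 11.6 * weight_kg + 879 if male else 8.7 * weight_kg + 826
    total_expenditure = bmr * ACTIVITY_COEFFICIENT[activity]
    return total_expenditure - deficit_kcal

# Worked example: a woman weighing 95 kg with a sedentary lifestyle:
# BMR = 8.7 * 95 + 826 = 1652.5 kcal; total expenditure ≈ 1652.5 * 1.3 ≈ 2148 kcal;
# a 600 kcal deficit gives a dietary target of roughly 1550 kcal/day.
print(round(diet_energy_kcal(95, male=False)))  # -> 1548
```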
A diet should be varied and contain all the necessary food ingredients. In the selection of recommended foods, individual patients' preferences should be considered. The proportion of macronutrients recommended by the WHO is as follows: about 20% of the energy content of the diet should come from proteins, about 25% from fats, and about 55% from carbohydrates. No more than 10% of energy may come from fats containing saturated fatty acids (SFA). At least 6% of energy should be provided by polyunsaturated fatty acids (PUFA), and the remainder of the fat energy by monounsaturated fatty acids in the cis configuration (MUFA). It should be noted that monounsaturated fatty acids in the trans configuration (trans fatty acids—TFA) should not exceed 1% of energy intake. The main sources of SFA in the diet are butter, lard, and beef tallow, as well as coconut and palm oils, and also cocoa, nut, and vegetable butters (these butters are the main ingredients of chocolate). The main food sources of MUFA are olive oil and other vegetable oils. TFAs come mainly from fast food, cakes, and cookies that contain industrially hydrogenated vegetable oils used in shortenings, frying fats, and margarines. Note that the intake of ω-6 and ω-3 PUFA should be maintained at a proper ratio of 4:1. Foods rich in ω-3 fatty acids include herring, tuna, salmon, sardines, mackerel, trout, and other oily fish. The main food sources of ω-6 acids (>60%) are oils from soybean, sunflower, safflower, evening primrose, grape seeds, poppy seeds, medicinal borage, and blackcurrant. Oils from wheat germ, corn, nuts, walnuts, cottonseed, and sesame contain approximately 40–50% of these fatty acids. Simple carbohydrates (e.g., glucose, fructose, lactose, xylitol, and sucrose) should provide <10% of energy. Dietary fiber should be provided by wholegrain bread, other grain products, vegetables, fruits, and legumes. What should be recommended is not merely reducing the amount of food but, above all, changing its quality (e.g., consumption of lower-fat dairy products, boiling or roasting meat instead of frying, cooking soups on vegetable stock, without roux and with yogurt instead of sour cream, and not using mayonnaise in salads). The patient should be made aware from the outset that the changes they introduce must be permanent. However, this does not mean that there are foods that they will not be able to eat for the rest of their life. If they eat a high-energy product very rarely, e.g., once a quarter, this will not cause weight gain. The following should be recommended: regular consumption of 3–5 meals a day (at similar times), finishing eating with a feeling of incomplete satiety, avoiding eating between meals (in the case of strong hunger between meals, one can drink a glass of water or eat a vegetable, not a fruit), not eating while watching TV, reading, or using the computer, and coping with stress in ways other than overeating. The distribution of energy when eating five meals: breakfast—25%; second breakfast—15%; lunch—35%; afternoon tea—10%; dinner—15%. The distribution of energy when eating three meals: breakfast—40%; lunch—40%; dinner—20%. Popular 'miracle diets' are not recommended. Both high-fat and high-protein diets, which contain significantly more cholesterol than recommended, promote the development of atherosclerosis. Moreover, they are ketogenic diets, which on the one hand inhibit the feeling of hunger but on the other hand lead to acidification of the body and electrolyte disorders.
High-protein diets also contain more phosphate than recommended, which causes calcium malabsorption and, with prolonged use, may lead to the development of osteoporosis. Low-energy and very low-energy fat-free diets cause significant weight loss, which promotes the 'yo-yo' effect, and also have a ketogenic effect. Recently published studies indicate that the use of 'miracle diets' is a risk factor for the development of emotional eating and eating disorders.
5.4. Behavioral Therapy
Lifestyle-changing therapy for patients who are overweight or obese should include behavioral interventions that improve adherence to dietary recommendations reducing the energy content of meals and that promote increased physical activity. Behavioral interventions may include self-monitoring of body weight, meal consumption, and physical activity; clear and precise definition of the goals of therapy; education on obesity, nutrition, and physical activity; individual and group counseling; stimulus control; systematic solving of emerging problems; stress reduction; cognitive behavioral therapy; motivational interviewing; behavioral contracting; psychological counseling; and mobilization of social support. If the patient fails to achieve a 2.5% reduction in body weight in the first month of treatment, behavioral intervention and support should be stepped up, as early body weight reduction is a key long-term indicator of success in losing weight. The GP should discuss realistic treatment goals with the patient. The goal is to lose about 10% of body weight in 3–6 months, then maintain this reduced weight for several months, and then act to reduce body weight further. The family doctor should also explain to the patient that:
- Losing weight too quickly is not beneficial for health (risk of developing liver steatosis and gallstones) and is associated with risks such as the 'yo-yo' effect (loss of lean mass and lowering of basal energy expenditure);
- The use of a very restrictive diet may cause deficiencies of vitamins and microelements;
- Treatment is not a short period of dieting but a permanent change in lifestyle, including eating habits, nutrition, and increased physical activity, and any unfavorable change in this respect will lead to disease relapse;
- Real success is the long-term maintenance of a weight loss of at least 10% of the initial body weight, not the number of kilograms the patient gets rid of.
The family doctor should also analyze the patient's eating habits that must be eliminated:
- Eating while watching TV;
- Calming oneself with food;
- Eating foods with the wrong composition;
- Eating in a hurry;
- Eating under the influence of the greatest hunger;
- Eating between meals;
- Irregular eating habits.
The GP should advise the patient to keep a food diary for at least 3 months. In the diary, before eating a meal, the patient records the time of consumption, composition, weight, and caloric value. All fluids consumed, except water, unsweetened coffee, and tea, should also be recorded. It is also worth recording the patient's physical activity, as problems may arise from insufficient lifestyle changes.
5.5. Physical Activity
Aerobic exercise should be recommended (prescribed) to patients who are overweight or obese as part of a lifestyle intervention. It may initially be advisable to recommend a gradually increasing amount and intensity of exercise; ultimately, this should be at least 150 min per week of moderate-intensity exercise divided into 3–5 sessions.
For weight loss and the prevention of weight regain in a patient implementing the program, 60–90 min of moderate daily exercise in leisure time is recommended. A dynamic, aerobic effort involving large muscle groups is recommended. Recommended forms of physical activity for adults with obesity include brisk walking, cycling, swimming, water exercises, and Nordic walking. Resistance exercise should be recommended (prescribed) to patients undergoing a weight loss intervention to support the loss of body fat while maintaining lean mass; ultimately, these should be single sets of resistance exercises engaging the major muscle groups, performed 2–3 times a week. In addition to aerobic exercise, the patient should perform resistance exercises 2–3 times a week, 12–15 repetitions each, at 30–50% of maximal muscle strength. The target training heart rate should be 60–70% of the maximum heart rate (220 minus age) in people without cardiovascular disease and, in people with cardiovascular disease, 40–70% of the heart rate reserve (the highest heart rate achieved during the exercise test minus the resting heart rate) plus the resting heart rate. Absolute contraindications to exercise therapy are decompensated circulatory failure, unstable coronary artery disease, and respiratory failure. In patients with a BMI > 40, physical activity should be recommended carefully, under medical and rehabilitation supervision. All overweight or obese patients, apart from physical exercise, should be encouraged to spend their free time actively to reduce their sedentary lifestyle. To improve engagement with an individual activity plan, the involvement of trained and certified fitness professionals should be considered.
5.6. Psychotherapy
All patients in whom screening indicates depression or anxiety should be referred to a psychologist. Indications for referring a patient to a psychologist dealing with eating disturbances also include the following:
- Emotional eating;
- Low self-esteem;
- Suspected NES;
- Suspected BED;
- Suspected food addiction.
The main recommendation is cognitive behavioral therapy (CBT). It is a combination of behavioral therapy (oriented toward changing behavior) and the cognitive approach, which refers to the patient's perception and understanding of the world, their thoughts, beliefs, imagination, and goals. CBT helps the patient to identify and, if necessary, change their own cognitive constructs (concerning, for example, themselves, their life situation, the illness, and the future) and to shape new behaviors and skills that will be helpful in achieving the assumed goals. The beliefs subjected to analysis and modification primarily relate to issues connected with obesity, its consequences, and the possibility of regulating body weight. Changing behaviors, in turn, concerns those activities that are directly related to weight loss and maintaining the achieved results. CBT in the treatment of obesity should include the following elements:
- Self-monitoring (e.g., keeping a food diary);
- Techniques to control the eating process (e.g., slow chewing);
- Control of stimuli and their reinforcement or reduction (e.g., shopping according to a list);
- Additional cognitive techniques;
- Relaxation techniques.
Another useful approach in the treatment of obesity is interpersonal therapy (IPT), which combines elements of the cognitive behavioral and psychodynamic approaches (attachment theory). Interpersonal therapy is considered to be particularly effective in treating BED.
Many studies also confirm the effectiveness of psychodynamic therapy in the treatment of patients with obesity. This approach primarily focuses on early childhood experiences, unconscious drives, and internal conflicts, as well as mental defense mechanisms. It aims to thoroughly analyze the mechanisms of the patient's mental functioning and to gain insight with the use of subjective tools (e.g., interpretations) and phenomena (e.g., transference and countertransference, free associations, and dreams).
5.7. Pharmacotherapy
There is no drug that can cure obesity. Currently available drugs can only support the treatment of obesity through various mechanisms of action. Therefore, pharmacotherapy for overweight and obesity should be used only as an adjunct to lifestyle therapy and not alone. Because obesity is a chronic disease, pharmacotherapy in its treatment should be used chronically, as long as it is effective and well tolerated. Short-term use of pharmacotherapy (3–6 months) does not produce long-term health benefits and cannot be recommended. Short-term use may be associated with short-term weight loss followed by the 'yo-yo' effect and negative health consequences. The choice of pharmacotherapy should be individual because of the heterogeneity of responses to obesity interventions, including medication. The current standard for selecting pharmacotherapy takes into account physician/patient preference, medication interactions, comorbidities, efficacy, and the risk of potential adverse events. However, new data support the concept that the primary cause of obesity development and the drug's mechanism of action should be the first criterion for choosing a drug. This approach has already been included in the guidelines of seven Polish Scientific Associations and in the Canadian guidelines. There are currently four drugs registered in the European Union that help reduce body weight: orlistat, a combination of naltrexone hydrochloride and bupropion hydrochloride, and the long-acting GLP-1 analogs liraglutide and semaglutide. Pharmacological treatment is indicated in patients with obesity (BMI ≥ 30 kg/m2), or with overweight (BMI ≥ 27 kg/m2) and ≥1 complication of obesity, in whom non-pharmacological treatment has failed to achieve the therapeutic goal. Pharmacotherapy can also be used at the stage of maintaining the effects achieved with non-pharmacological treatment and after surgical treatment of obesity. If, after 3 months of pharmacotherapy, weight loss is less than 5% in patients without a diagnosis of type 2 diabetes and less than 3% in people diagnosed with this disease (counting from the start of drug treatment), its continuation is unjustified. However, it should be stressed that if pharmacotherapy has no effect, one should not simply wait until 3 months have passed but should discuss with the patient the implementation of the recommended diet and physical activity. In addition, psychological problems should be analyzed, and the use of the prescribed pharmacotherapy checked. Orlistat (tetrahydrolipstatin, a derivative of lipstatin produced by Streptomyces toxytricini) is used orally at a dose of 120 mg three times a day before main meals. In randomized trials, taking orlistat for one year resulted in a weight loss of ~3 kg more than in the placebo group. This drug inhibits the activity of gastrointestinal lipases (gastric, pancreatic, and intestinal) and prevents the digestion and absorption of some of the fat taken with food.
It does not affect the feeling of satiety, hunger, or appetite. It is absorbed from the gastrointestinal tract in trace amounts (1% of the dose), and its metabolites are inactive; therefore, it has no systemic effect. The use of orlistat is justified only in people who prefer fatty foods and have problems with modifying their eating habits, and who are aware of the drug's mechanism of action and possible side effects. Consumption of food containing too much fat results in an increased frequency of bowel movements, loose and liquid stools, fatty stools, an urgency to defecate, fecal incontinence, bloating, and abdominal pain. The patient should be warned that these are the effects of nutritional errors and that reducing fat consumption will eliminate their occurrence. Patients using lipophilic drugs should wait ≥2 h between taking them and taking orlistat. Contraindications are hypersensitivity to the drug, pregnancy and lactation, cholestasis, and chronic malabsorption syndromes.
Indications for the use of orlistat:
- Obesity (BMI ≥ 30 kg/m2);
- Overweight (BMI ≥ 27 kg/m2) with obesity complications such as hypertension, lipid disturbances, ischemic heart disease, myocardial infarction, type 2 diabetes, sleep apnea, or PCOS.
Contraindications to the use of orlistat:
- Chronic malabsorption syndrome;
- Cholestasis;
- Pregnancy;
- Breast-feeding;
- Hypersensitivity to orlistat.
A combined preparation contains two active substances, naltrexone hydrochloride and bupropion hydrochloride, in one tablet. The prolonged-release tablet contains 7.2 mg of naltrexone and 78 mg of bupropion (equivalent to 8 mg of naltrexone hydrochloride and 90 mg of bupropion hydrochloride). Treatment begins with one tablet in the morning for a week; in the second week, one tablet is taken in the morning and one in the evening; in the third week, two tablets in the morning and one in the evening; and in the fourth week, the target dose of two tablets in the morning and two in the evening is introduced (the daily target dose is 28.8 mg of naltrexone and 312 mg of bupropion, equivalent to 32 mg of naltrexone hydrochloride and 360 mg of bupropion hydrochloride). If, after 16 weeks of using the preparation, the patient's body weight has not decreased by ≥5% of the initial value, the drug should be discontinued. Naltrexone and bupropion act on the same regions of the central nervous system (the arcuate nucleus of the hypothalamus and the mesolimbic dopaminergic reward system), and the combination has a hyperadditive effect on the regulation of food intake. This allows them to be used in lower doses, which reduces the risk of side effects and promotes better tolerance of treatment. Bupropion is a dopamine and norepinephrine reuptake inhibitor (NDRI) and a non-competitive nicotinic receptor antagonist of the β-ketoamphetamine class. It is used alone to treat depression, seasonal affective disorder, and nicotine addiction. Naltrexone, in turn, is an antagonist of the µ-opioid receptor, to a lesser extent of the κ-receptor, and to an even lesser extent of the δ-receptor. At a dose of 50 mg, it is used in the treatment of non-opioid addictions, primarily alcohol addiction (supporting abstinence by reducing the urge to drink).
Bupropion, in the arcuate nucleus of the hypothalamus, stimulates the activity of POMC-secreting neurons and, as a β-ketoamphetamine, the release of cocaine- and amphetamine-regulated transcript (CART), which in turn stimulates the release of α-melanocyte-stimulating hormone (α-MSH); α-MSH binds to melanocortin type 4 receptors (MC4-R), stimulating the feeling of satiety. The release of POMC is normally limited by a feedback loop involving the increased release of β-endorphin; naltrexone—by blocking µ-opioid receptors—inhibits this feedback loop and, as a result, prolongs the feeling of satiety. Naltrexone and bupropion also reduce food intake stimulated by appetite (the hedonistic search for a specific food not to satisfy hunger but to derive pleasure from its consumption), which is governed by the reward system and its main neurotransmitters: dopamine, norepinephrine, and endogenous opioids. Bupropion inhibits the reuptake of dopamine and noradrenaline (inhibiting the drive to seek food), and naltrexone blocks the opioid receptors stimulated by endogenous opioids (reducing the 'liking' of tasty food). In clinical trials, the most common adverse reactions to this combination product were nausea, vomiting, headache, dizziness, insomnia, and dry mouth. They usually disappear spontaneously within the first 4 weeks of treatment. The drug is indicated, as an adjunct to a reduced-energy diet and increased physical activity, for weight loss in adult patients (≥18 years old) with a baseline BMI of either:
- BMI ≥ 30 kg/m2 (obesity);
- BMI 27 kg/m2 to <30 kg/m2 (overweight), if the patient has one or more complications of obesity (e.g., type 2 diabetes, dyslipidemia, or compensated hypertension).
Contraindications:
- Hypersensitivity to any active or auxiliary substance;
- Uncontrolled high blood pressure;
- Current epilepsy or a history of seizures;
- A tumor of the central nervous system;
- The period immediately following abrupt withdrawal from alcohol or benzodiazepines in an addicted person;
- History of bipolar disorder;
- Taking bupropion or naltrexone for an indication other than weight loss;
- Bulimia nervosa or anorexia nervosa, current or in the past;
- Addiction to long-term use of opioids or opiates (e.g., methadone) and the period shortly after their discontinuation in an addicted person;
- Taking monoamine oxidase inhibitors (MAOI);
- Severe liver problems;
- End-stage renal failure or severe disorders of kidney function;
- Pregnancy and breastfeeding.
The use of this drug as the first choice is recommended in patients diagnosed with emotional eating, BED, NES, food addiction, or depression, and in those undergoing smoking cessation. Liraglutide is a long-acting GLP-1 analog that is used s.c. once daily at a target dose of 3 mg/d. Treatment of obesity is started with a dose of 0.6 mg/d, which is increased weekly by 0.6 mg/d until a dose of 3 mg/d is reached. If the drug is still poorly tolerated 2 weeks after a dose increase, discontinuation of the drug should be considered. Treatment should be discontinued if, after 12 weeks of use at a dose of 3.0 mg/d, the patient has not lost ≥5% of the initial body weight. Liraglutide, like natural GLP-1, acts on target cells, producing effects analogous to those of the natural hormone.
The main mechanism leading to weight loss depends on the direct activation of GLP-1 receptors located in the central nervous system and the downstream activation of GLP-1 afferents, including neurons of the autonomic nervous system. GLP-1 receptors are found in many structures of the central nervous system, including solitary tract nuclei and POMC/CART anorexigenic neurons of the hypothalamus, and their activation is responsible for the feeling of satiety. The concomitant inhibition of hunger is the result of indirect inhibition of neurotransmission in NPY- and AgRP-expressing neurons through γ-aminobutyric acid (GABA)-dependent signaling. The additional mechanism of increased satiety is slowing down gastric emptying . Experimental studies in rats also indicate that reduced food intake may be related to nausea, which is induced by the effect of liraglutide on GLP-1 receptors in the solitary tract nucleus . Liraglutide also acts in many peripheral tissues. The incretin effect exerted by GLP-1 agonists was the first to be discovered, including GLP-1-stimulated increased glucose-dependent insulin secretion from pancreatic β-cells, which is used in the treatment of type 2 diabetes, where the target dose is 1.8 mg/d (liraglutide has been approved under a different trade name for the treatment of diabetes) . Based on a trial performed on patients with type 2 diabetes and treated with liraglutide at a dose of 1.8 mg/d, this treatment did not increase the risk of cardiovascular complications . However, there are no prospective clinical trials conducted in patients with obesity but without type 2 diabetes and treated with liraglutide in a dose of 3.0 mg/d. The improvement of cardiometabolic parameters in people without type 2 diabetes primarily depends on the reduction of body weight. Indications are similar to indications for the use of other drugs supporting the treatment of obesity; the use of liraglutide 3 mg in the treatment of overweight and obesity should be considered as an adjunct to lifestyle modification in patients: (1) With a BMI ≥ 30 kg/m 2 (obesity); (2) With a BMI of 27–30 kg/m 2 (overweight) if accompanied by ≥1 of complications related to excessive body weight (including prediabetes or type 2 diabetes, hypertension, lipid disorders, or obstructive sleep apnea). The effectiveness of treatment is assessed after 12 weeks of using liraglutide in a full dose of 3.0 mg 1 × daily s.c.; it may be continued if body weight has decreased by ≥5%. The most common side effects are nausea, vomiting, diarrhea, and constipation, which are usually temporary . This drug should be the first choice in the treatment of obesity in patients with prediabetes or type 2 diabetes, as well as clinical features of insulin resistance after the exclusion of emotional eating, BED, food addiction, and NES . Studies conducted using functional magnetic resonance imaging confirmed the lack of influence of GLP-1 analogs on the reward system and their lower effectiveness in people with emotional eating to stimulate the reward system . Contraindications for the use of liraglutide, apart from hypersensitivity to the active substance or excipients, include a family history of medullary thyroid cancer, a history of pancreatitis, and pregnancy. Semaglutide is a very long-acting GLP-1 analog that is used s.c. once a week at a target dose of 2.4 mg/week. Treatment of obesity starts with a dose of 0.25 mg/week. 
After 4 weeks, the dose is increased to 0.5 mg/week, after another 4 weeks to 1 mg/week, after another 4 weeks to 1.7 mg/week, and after a further 4 weeks to the target dose of 2.4 mg/week. If the drug is poorly tolerated 2 weeks after increasing the dose, discontinuation should be considered. If severe gastrointestinal symptoms occur, consideration should be given to delaying dose escalation or reverting to the previous dose until the symptoms have improved. Due to its long half-life, the drug should be discontinued 2 months before a planned pregnancy. The mechanism of action of semaglutide is similar to that of liraglutide. Like liraglutide, semaglutide was originally registered for the treatment of type 2 diabetes, where the target dose is 1 mg/week (for the treatment of diabetes, semaglutide has been registered under a different trade name). To date, no prospective clinical trials have been conducted to evaluate the effect of semaglutide on cardiovascular risk in non-diabetic subjects. The improvement of cardiometabolic parameters in people without type 2 diabetes depends primarily on the reduction of body weight. The indications are similar to those for other drugs supporting the treatment of obesity; the use of semaglutide 2.4 mg in the treatment of overweight and obesity should be considered as an adjunct to lifestyle modification in patients: (1) with a BMI ≥ 30 kg/m2 (obesity); (2) with a BMI of 27–30 kg/m2 (overweight) if accompanied by ≥1 complication related to excessive body weight (including prediabetes or type 2 diabetes, hypertension, lipid disorders, obstructive sleep apnea, or cardiovascular disease). The most common adverse drug reactions are nausea, vomiting, diarrhea, constipation, and abdominal pain, which are usually temporary. Due to the rapid initial weight loss, there is also a risk of developing gallstones. Contraindications to the use of semaglutide, apart from hypersensitivity to the active substance or excipients, include a family history of medullary thyroid cancer, a history of pancreatitis, and pregnancy. If patients have psychogenic eating disorders, pharmacotherapy with liraglutide or semaglutide may be less effective, and it is suggested that the combination of naltrexone and bupropion should be considered first. Some authors also propose considering polytherapy with liraglutide plus naltrexone/bupropion. The safety of the combination of naltrexone/bupropion and a long-acting GLP-1 analog was confirmed in a recently published post hoc analysis of the LIGHT study. It should be noted that a systematic review and meta-analysis of randomized placebo-controlled trials showed that the use of GLP-1 analogs is associated with an increased risk of developing gallstone disease (cholelithiasis), and this risk increases with higher doses, longer duration of use, and use for weight loss. In addition, an analysis of cases reported in the European Pharmacovigilance Database showed that the use of GLP-1 analogs is associated with a higher risk of developing thyroid cancer. In a recently published study, which analyzed a total of 2526 cases of patients with thyroid cancer compared with 45,184 people from a control group, it was shown that the use of GLP-1 analogs for 1–3 years was associated with an increased risk of all thyroid cancers. Attention should also be paid to the increased risk of tachycardia and arrhythmia during treatment with semaglutide.
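For illustration, the dose-escalation schedules quoted above for the three titrated drugs can be encoded as simple data structures, for example in a prescribing aid or patient app. This is only a sketch: the variable and function names are the author's own, and actual dosing must follow the summary of product characteristics and clinical judgment, including slower escalation or discontinuation when the drug is poorly tolerated.

```python
# Illustrative encoding of the dose-escalation schedules described in the text.
# The numbers are quoted from the text; the structures and names are only an
# example of how such a protocol could be represented.

NALTREXONE_BUPROPION_TABLETS = {  # tablets (morning, evening) per treatment week
    1: (1, 0),
    2: (1, 1),
    3: (2, 1),
    4: (2, 2),  # target: 32 mg naltrexone HCl / 360 mg bupropion HCl daily
}

LIRAGLUTIDE_MG_PER_DAY = [0.6, 1.2, 1.8, 2.4, 3.0]      # increased weekly to 3.0 mg/d
SEMAGLUTIDE_MG_PER_WEEK = [0.25, 0.5, 1.0, 1.7, 2.4]    # increased every 4 weeks to 2.4 mg/week

def semaglutide_dose(week: int) -> float:
    """Scheduled semaglutide dose (mg/week) for a given treatment week (1-based)."""
    step = min((week - 1) // 4, len(SEMAGLUTIDE_MG_PER_WEEK) - 1)
    return SEMAGLUTIDE_MG_PER_WEEK[step]

# Example: week 9 of semaglutide treatment falls in the third 4-week step
print(semaglutide_dose(9))  # 1.0
```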
According to the ESE guidelines from 2020, it is not recommended to use metformin solely for weight reduction (it is not registered for this indication); the indications for the use of this drug are prediabetes and type 2 diabetes. Other drugs: there is insufficient evidence to support the use of herbal medicines, dietary supplements, probiotics, or homeopathy in the treatment of obesity. The results of single studies indicate that the use of fiber preparations containing soluble and insoluble fiber may enhance the effects of non-pharmacological treatment of obesity.
5.8. Bariatric Surgery
5.8.1. Requirements for Reference Centers
Bariatric operations should be performed in centers specializing in this type of surgery that are able to choose, together with the patient, the optimal surgical method (based on medical indications and the patient's preferences) and that have the relevant expertise and appropriate equipment. This concerns not only the equipment of the operating room (e.g., the operating table and laparoscopic tools) but also the equipment of the ward (hospital beds with a load capacity of 250–300 kg, couches, wheelchairs, chairs, and bariatric platforms) and sanitary facilities (shower cabins adapted for people with obesity, equipped with appropriate handles and handrails).
5.8.2. Qualification
Proper qualification for the surgical treatment of obesity is one of the key factors affecting its results. It is recommended that non-surgical treatment be attempted before considering surgery. The primary criterion for qualifying for bariatric surgery is the patient's BMI. Until recently, the second crucial criterion assessed during qualification was the patient's age. Currently, there is no age limit for patients undergoing bariatric surgery; however, careful selection of older patients is recommended, in whom frailty assessment is critical, since frailty, more than age, is associated with a higher rate of postoperative complications. It is worth noting that a 5–10% loss of body weight is recommended before surgery (among other benefits, it has a positive effect on the results of bariatric treatment and reduces the risk of perioperative complications).
Eligibility criteria based on BMI:
- BMI > 40.0 kg/m2;
- BMI 35.0–39.9 kg/m2 in a patient with obesity complications (e.g., type 2 diabetes, hypertension, severe joint diseases, dyslipidemia, or a severe form of OSA); the latest guidelines recommend surgery in patients with a BMI in this range regardless of obesity complications;
- BMI 30.0–35.0 kg/m2 and uncontrolled type 2 diabetes despite appropriate pharmacological treatment.
Contraindications to bariatric procedures:
- Mental disorders—personality disorders, severe depression;
- Alcoholism;
- Drug abuse;
- Eating disorders;
- No possibility of proper, long-term postoperative care;
- Poor long-term prognosis due to life-threatening diseases.
The final qualification of the patient for the operation must be multidisciplinary. It is a decision of a team of specialists experienced in the treatment of obesity: a surgeon, an internist, an anesthesiologist, a psychologist or psychiatrist, a dietitian, a physiotherapist, and, if necessary, a cardiologist, a pulmonologist, a gastroenterologist, and a neurologist. The optimal time to prepare the patient for surgery should not be shorter than three months and not longer than 6–12 months. The preoperative assessment of patients scheduled for bariatric surgery should be much broader than for any other major abdominal surgery.
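The BMI-based thresholds mentioned in this document for pharmacotherapy and for considering bariatric surgery can be summarized in a short sketch. The function names are the author's own, and BMI is only the entry criterion: actual qualification is multidisciplinary and must also weigh complications, contraindications, frailty, and the patient's readiness, as described above.

```python
# Illustrative summary of the BMI-based thresholds quoted in this document.
# This is only a sketch: real qualification is a multidisciplinary decision
# that considers far more than these numbers.

def pharmacotherapy_indicated(bmi: float, has_complication: bool) -> bool:
    """Pharmacotherapy: BMI >= 30, or BMI >= 27 with >= 1 obesity complication."""
    return bmi >= 30 or (bmi >= 27 and has_complication)

def surgery_may_be_considered(bmi: float, has_complication: bool,
                              uncontrolled_t2dm: bool) -> bool:
    """Surgery: BMI > 40; BMI 35-39.9 with complications (the latest guidelines
    allow this range regardless of complications); BMI 30-35 with uncontrolled
    type 2 diabetes despite appropriate pharmacological treatment."""
    if bmi > 40:
        return True
    if 35 <= bmi < 40 and has_complication:
        return True
    if 30 <= bmi < 35 and uncontrolled_t2dm:
        return True
    return False

print(pharmacotherapy_indicated(28.5, has_complication=True))          # True
print(surgery_may_be_considered(36.2, has_complication=False,
                                uncontrolled_t2dm=False))              # False
```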
The success of the surgical treatment of obesity requires a good understanding by the patient of the entire treatment process, not only of the surgery itself. Therefore, the patient must be provided with information on the benefits, the consequences, and the risks associated with the operation. A key, though unfortunately often overlooked, aspect of qualifying a patient for surgery is the psychological consultation (a comprehensive assessment sometimes requires several meetings with a psychologist); behavioral, nutritional, family, and personality assessment should be an integral part of the patient's preoperative evaluation. Working with a psychologist is one of the elements aimed at increasing the safety and effectiveness of surgical treatment by identifying specific areas around which an individually tailored treatment plan can be created.
5.8.3. Types of Operation
There are at least a dozen different types of operations to choose from, which to a greater or lesser extent modify not only the anatomy but also the physiology of the digestive system and which are characterized by different numbers and types of long-term complications. The common feature of all surgical methods is the preferred access route—2D or 3D laparoscopy. Detailed qualification criteria for a given type of bariatric operation are beyond the scope of this study. It should be emphasized that bariatric treatment is personalized, and each element (mainly the surgical procedure) should be individually selected for the patient.
5.8.4. Post-Treatment Monitoring and Intervention
In-Hospital
Perioperative care for patients undergoing bariatric treatment should be organized according to the Enhanced Recovery After Bariatric Surgery principles. Patients with morbid obesity have an increased risk of partial atelectasis in the distal alveoli. In the postoperative period, in the recovery room, it is recommended to administer supplemental oxygen. For patients diagnosed with obstructive sleep apnea, it is necessary to use CPAP, i.e., breathing support that prevents the collapse of the alveoli. In the postoperative period, appropriate respiratory rehabilitation and breathing exercises are important. Each patient should be mobilized on the first day after returning from the postoperative observation unit. Patients can drink fluids after returning from the recovery room. On the first postoperative day, the patient takes fluids orally (with no daily volume limit) and receives oral nutritional support (ONS). Patients are encouraged to drink fluids while walking around the hospital ward (simultaneous activation). In the following days, the diet is gradually expanded. Each patient receives proton pump inhibitors (PPI) twice a day, antithrombotic prophylaxis, and non-opioid analgesia until discharge. Discharge occurs on postoperative day 1 or 2 upon meeting specific discharge criteria:
- The patient tolerates an oral diet and drinks at least 1000 mL of fluids per day;
- Does not require intravenous fluids;
- Postoperative pain is manageable with oral medication;
- The level of physical activity is similar to that before the operation;
- After discharge, the patient will remain under the care of third parties and, if necessary, contact with the treatment center is ensured;
- There were no complications requiring hospitalization.
Upon discharge, patients receive a follow-up visit plan for 1 year, a baseline dietary plan, and prescriptions for a PPI, antithrombotic prophylaxis, vitamin supplementation, and ursodeoxycholic acid (gallstone prevention).
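As an illustration, the discharge criteria listed above can be recorded in a structured form, for example in an electronic checklist. This is only a sketch under assumed field names; it is not a validated clinical rule.

```python
# Illustrative checklist for the discharge criteria listed above (postoperative
# day 1-2). Field names are the author's own; discharge decisions remain with
# the treating team.

from dataclasses import dataclass

@dataclass
class DischargeCriteria:
    tolerates_oral_diet: bool          # oral diet tolerated, >= 1000 mL fluids/day
    no_iv_fluids_required: bool
    pain_controlled_orally: bool
    activity_as_before_surgery: bool
    home_care_and_contact_ensured: bool
    no_complications_requiring_stay: bool

    def ready_for_discharge(self) -> bool:
        return all(vars(self).values())

patient = DischargeCriteria(True, True, True, True, True, False)
print(patient.ready_for_discharge())  # False - a complication keeps the patient in hospital
```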
Perioperative Monitoring for up to 30 Days
The wound should be kept clean, washed every day with a disinfectant, and covered with a dressing. Stitches should be removed 7–10 days after the surgery; in patients with diabetes, this time may be longer. Removal can be performed by a primary care physician or in a surgical outpatient clinic. Patients with fever, vomiting, wound discharge, abdominal pain requiring opioids, or symptoms of dehydration should be referred to a surgical unit. A follow-up visit with a surgeon is necessary during the first 30 days. Every patient is prescribed low-molecular-weight heparin injections. There are no clear standards for how long antithrombotic prophylaxis should be administered; however, it should not be shorter than 7 days after discharge. A PPI should be taken twice a day for 30 days. A longer period should be considered in patients with symptoms of gastroesophageal reflux after surgery. Rapid weight loss is associated with an increased risk of the development of gallstones. Recently, several randomized and observational studies have shown that the postoperative administration of ursodeoxycholic acid significantly reduces the risk of the development of biliary stones. The debate regarding the duration and dose of such prophylaxis is still ongoing; however, at the moment, it is advisable to consider the use of 500 mg of ursodeoxycholic acid in divided doses for 6 months. From 2–4 weeks after surgery, the patient can take solid foods, depending on tolerance. Quick and effective activation of the gastrointestinal tract after bariatric surgery would not be possible without proper patient education in the preoperative period. In the perioperative and postoperative periods, the patient should also be consulted by a dietitian, who will coordinate the patient's further nutrition. Due to the initial difficulty in taking food, patients need an adequate protein supply of about 1–1.5 g of protein per kilogram of ideal body weight. Initially, the most important aspect of nutrition for patients after surgery is an adequate amount of fluids to avoid dehydration. An operation with a large malabsorptive effect may require an even greater supply of protein. During the first 30 days, each patient should have a follow-up visit with a dietitian to assess their diet plan. Patients should resume physical activity straight after surgery. For 6 weeks, patients should not perform exercises involving the abdominal muscles. Preferably, physical activity should be planned and supervised by a physiotherapist.
Monitoring during the First Year after Surgery
Patients after bariatric surgery require follow-up visits to determine weight loss, remission of obesity complications, complications associated with the surgery, nutritional status, and potential qualification for revisional surgery. The success of weight loss should be assessed as the percentage of excess weight lost (%EWL). More than 50% EWL is considered a success. Obesity complications, such as type 2 diabetes and hypertension, need to be closely monitored by primary care physicians or specialists so that medication doses can be reduced at the proper time. Remission should be assessed according to the treatment standards for the specific diseases. Female patients of childbearing age should be advised to use medically effective contraception for 12 months following the surgery—pregnancy during the period of weight loss is not recommended. Malnutrition after bariatric surgery is common.
It partly depends on the type of surgery performed (it is more common after RYGB than after SG) and also on the initial nutritional status of the patient. People with obesity are usually deficient in vitamins, particularly vitamins D and B12, thiamine, and folic acid, as well as in calcium, iron, zinc, and copper. In recent years, easily absorbable products for bariatric patients, containing all the necessary ingredients, have appeared on the market. In addition, it is recommended to periodically check the serum levels of micro- and macroelements. Control tests, including kidney and liver function tests, a complete blood count (CBC), and measurements of serum ferritin, folic acid, vitamin B12, vitamin D, and calcium, should be performed 3, 6, and 12 months after the procedure and then at least once a year. If this has not been done before surgery, intact parathyroid hormone levels should be checked. Serum levels of vitamin A, vitamin E, vitamin K1, and PIVKA-II (protein induced by vitamin K absence or antagonism) should be measured at regular intervals after malabsorptive procedures, such as BPD/DS, or when symptoms of deficiency occur. It is recommended to monitor serum zinc, copper, and selenium levels after sleeve gastrectomy (SG), Roux-en-Y gastric bypass (RYGB), or BPD/DS. Routine monitoring of magnesium levels is not required. The patient should be monitored by a 'bariatric team' comprising a surgeon, a psychologist, a dietitian, and others. The optimal time for a psychological consultation is between 6 and 12 months after surgery; at this time, the pace of weight loss slows down, and psychological support enhances the patient's motivation. A follow-up visit with a surgeon should be scheduled 1 year after the procedure to assess weight, comorbidities, and additional blood tests. Each patient requires a mandatory gastroscopy 1 year after surgery.
Long-Term Follow-Up
The gastrointestinal tract of bariatric patients is permanently changed; thus, they require indefinite monitoring to detect any signs of malnutrition, macro- and microelement deficiencies, weight regain, or complications such as gastroesophageal reflux and dumping syndrome. Patients with weight regain or the onset of surgery-related complications should be referred to a bariatric surgeon to determine the need for revisional surgery. Currently, it is also possible to help such patients using pharmacotherapy. Pharmacotherapy should be used in patients with a lack of weight loss and as a first-line treatment in patients with weight regain. Currently available medications are the glucagon-like peptide-1 (GLP-1) receptor agonists, including liraglutide (3 mg once a day, s.c.) and semaglutide (1.5 mg once a week, s.c.), and the combination preparation containing naltrexone and bupropion (16 mg/180 mg twice a day, orally). The choice of pharmacotherapy should be individualized. Some patients, after massive weight loss, may require plastic surgery to remove excess skin. This operation should be considered at least 12 months after the bariatric procedure, with at least a 6-month period of stable weight.
5.9. Effectiveness of Obesity Treatment
Long-Term Monitoring
Beneficial prognostic factors during treatment that indicate long-term maintenance of the effects include a greater frequency of self-monitoring of energy intake and body weight, consistency between dietary choices and weight reduction goals, lower intensity of negative mood, lower intensity of hunger and emotional eating, and regular physical activity.
Therefore, it is very important to focus work with the patient on these factors during therapy. The patient should be motivated to self-monitor and make appropriate food choices, which the patient should learn with the support of a dietician and physician. A very important element regarding food choices is directing the patient’s thinking in such a way that there are no dishes and products that are completely forbidden, but there are products that can be eaten in small amounts. The patient must not think that eating a certain product is a sin or a certain product is a ‘forbidden fruit’ because this may become the reason for obsessive thinking about it. Often because of such thinking, the end of the active phase of treatment, i.e., achieving the therapeutic goal for the patient, is the limit beyond which they will be able to eat these forbidden products . Other very important elements that positively affect long-term weight loss maintenance are regular meals, eating breakfast, and choosing food with a low content of fats and simple sugars. Long-term weight maintenance is also associated with the implementation of regular physical activity accepted by the patients, and the time of it should be longer than in the phase of active treatment, as lower weight is related to decreased energy expenditure during physical activity . There is also a need to effectively treat negative emotions, intensification of hunger, and emotional eating. Both pharmacotherapy and psychotherapy should be used, and, if necessary, both of these therapeutic methods should also be used in the phase of maintaining the achieved treatment effects . Numerous data indicate that the psychological assessment of the patient’s personality is prognostic and helps to select those patients who may have problems with maintaining the therapeutic effects and who should be subject to greater supervision at this stage. A stable personality with a higher level of self-control and greater emotional maturity, as well as personality traits such as creativity, autonomy, and self-sufficiency, and an internal locus of control with a sense of self-efficacy and better self-esteem, are favorable prognostic factors. Whereas dysfunctions in social interactions, higher levels of anxiety, and avoidance of monotony are unfavorable prognostic factors . Maintaining the achieved effects is easier in patients with a more stable living situation and fewer stressful life events and receiving social support. It should be remembered that patients with more problems related to eating behaviors, life situations, and worse functioning, as well as less social support, require more intensive supervision and support and the search for individual solutions in the maintenance phase . It is also very important to teach the patient that if the disease recurs, they should immediately contact the doctor and apply the proposed treatment methods. Numerous studies have shown that prolonged therapeutic interventions and continuous professional support in the maintenance phase significantly improve long-term treatment outcomes . Establishing a therapeutic goal should meet the SMART business goal rule, i.e., specific, measurable, achievable, relevant, and timely. The overriding goal of obesity treatment is to slow down the progression of the disease, avoid relapses, and prevent the development of complications caused by excess fat in the body or reduce their severity, as well as overall improvement of the patient’s health and quality of life, and life extension. 
The overriding goal in patients without complications of obesity is to reduce the severity of the disease by one stage. In patients with complications, the goal is a reduction in body weight sufficient to contribute to a significant improvement in the control of these complications, to enable a reduction in the doses and/or number of drugs used, and, in some less advanced cases, to allow remission of the complications and discontinuation of pharmacotherapy. Achieving such goals requires individual determination of the percentage reduction of body weight in relation to the initial value. The goal should always be set in such a way that the patient does not feel that it is so distant as to be almost unattainable. Therefore, it is worth setting 3–6-month stages in which the goal is to reduce body weight by 5–10% of the initial value, followed by a 3–6-month period of maintaining the obtained results and, if necessary, another stage of 5–10% body weight reduction. It is believed that different percentages of initial body weight reduction are required to improve individual complications of obesity:
- Approximately 10–40% body weight reduction in patients diagnosed with non-alcoholic steatohepatitis in the course of MAFLD;
- At least 5% to 15% body weight reduction in patients diagnosed with the following:
  - Type 2 diabetes (lowering HbA1c, reducing the number and/or doses of hypoglycemic drugs used, and remission of the disease, especially if it is of short duration);
  - Dyslipidemia (decrease in blood triglycerides and non-HDL cholesterol and increase in HDL cholesterol);
  - Arterial hypertension (reduction of systolic and diastolic pressure and reduction of the number and/or doses of antihypertensive drugs);
  - Polycystic ovary syndrome (return of ovulatory cycles and regular menstruation, reduction of hirsutism, improvement of insulin resistance, and reduction of androgen levels in the blood);
- At least 5% to 10% body weight reduction is recommended in patients diagnosed with the following:
  - Male hypogonadism (increased testosterone levels in the blood);
  - Stress urinary incontinence (reduced frequency of episodes of incontinence);
- At least 7–8% body weight reduction is recommended in patients diagnosed with bronchial asthma (improvement in forced expiratory volume in 1 s and reduction in the severity of symptoms);
- At least 7–10% body weight reduction is recommended in patients diagnosed with obstructive sleep apnea;
- At least 10% body weight reduction is recommended in patients with the following:
  - Prediabetes (preventing the development of type 2 diabetes and improving glucose levels);
  - Female infertility (return of ovulatory menstrual cycles, pregnancy, and the birth of a live newborn);
  - Osteoarthritis (reduction of pain and improvement of motor function);
  - Gastroesophageal reflux (reduced symptoms);
- At least 5% body weight reduction is recommended in patients with the following:
  - The steatosis stage in the course of MAFLD (reducing lipid accumulation in the liver and improving metabolic function).
It is very important to set partial goals, both in terms of the effects and of the changes leading to them, because the small-step method allows the patient to adapt better to the changes and does not put pressure on them to achieve the effects. To avoid the patient's disappointment and discouragement, one should explain to them that what benefits their health is slow (approximately 1 kg/week in the first month and approximately 0.5 kg/week in the following months) but permanent weight loss. The main reason for losing weight is improving health, not the number of kilograms lost. Slow but systematic weight loss resulting from a balanced diet and increased physical activity lowers blood pressure, serum glucose, and lipid levels, improves quality of life, and, in many people with accompanying diseases, allows a reduction in the number of drugs used. Weight loss that is too fast causes a significant loss of lean mass and increases the risk of developing gallstones and fatty liver, as well as the occurrence of the 'yo-yo' effect.
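For illustration, the two ways of expressing weight loss used in this document—the percentage of initial body weight lost (for the staged 5–10% goals) and the percentage of excess weight lost (%EWL, used after bariatric surgery, where >50% is considered a success)—can be calculated as follows. Defining "excess weight" relative to a BMI of 25 kg/m2 is a common convention but an assumption here, as the text does not specify it.

```python
# Illustrative calculations; the reference BMI of 25 kg/m2 used to define
# "ideal" weight for %EWL is an assumed convention, not taken from the text.

def percent_body_weight_lost(initial_kg: float, current_kg: float) -> float:
    return 100 * (initial_kg - current_kg) / initial_kg

def percent_excess_weight_lost(initial_kg: float, current_kg: float,
                               height_m: float, reference_bmi: float = 25.0) -> float:
    ideal_kg = reference_bmi * height_m ** 2          # assumed ideal-weight convention
    excess_kg = initial_kg - ideal_kg
    return 100 * (initial_kg - current_kg) / excess_kg

# Example: 120 kg -> 96 kg, height 1.70 m
print(round(percent_body_weight_lost(120, 96), 1))            # 20.0 (% of initial weight)
print(round(percent_excess_weight_lost(120, 96, 1.70), 1))    # ≈ 50.3 (%EWL)
```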
In addition, the patient should be made aware that the treatment process will be long and requires commitment from them and that the doctor and other members of the therapeutic team are there to help them overcome difficulties. The patient should be presented with all therapeutic options that should be used in their case and discuss the benefits and possible risks associated with them. 4. AGREE—obtaining the patient’s consent to the proposed therapeutic goal and treatment plan. It is necessary to be aware that it is the patient who implements the doctor’s recommendations; therefore, they cannot be arbitrary and must consider the patient’s capabilities and the degree to which they are willing to comply with the recommendations. In other words, this stage is a compromise between what the patient should do, according to the doctor, and what the patient can and wants to achieve. At this stage, negotiations should be conducted with the patient based on respect for their autonomy and their right to choose. However, the choice should be conscious, i.e., the consequences should be explained to the patient. Obtaining the patient’s acceptance of the proposed therapeutic goal and treatment plan may require many discussions. This should not discourage the doctor from taking them. In addition, the physician must be willing to modify his recommendations based on the needs and capabilities of the patient. It is very important at this stage to work on realizing the patient’s expectations regarding weight loss. The patient should also be made aware that meeting the behavioral change goals is more important than weight loss itself because this will ultimately help them achieve the intended weight reduction. Success for each patient will have a different dimension, but it is important that the patient focuses on improving mental and physical health, not on the number of kilograms lost. 5. ASSIST—supporting the patient in the therapeutic process. After agreeing on their therapeutic goal, the doctor should help the patient identify barriers that may hinder treatment (social, medical, emotional, and economic) and factors that facilitate treatment (motivation and social support). The role of the doctor is to identify the causes of the disease, educate, recommend adequate therapeutic methods, and support the patient in their implementation. An important element of support is setting the schedule of follow-up visits, determining their frequency, and informing the patient what will be checked during the visit, which will make it easier for the patient to implement the recommendations. The schedule should specify the number of visits necessary to achieve the therapeutic goal, minimum and maximum time intervals between visits (the exact date of the next visit should be determined at the previous visit), parameters that will be checked during the visit, and what should be brought to the next visit (e.g., physical activity and results of additional tests). At each follow-up visit, new problems that make it difficult for the patient to comply with the recommendations should be identified, and solutions or other therapeutic methods should be introduced to eliminate the existing problems . The term ‘diet’ defines a way of eating; therefore, everything a person eats is a diet. However, in the common consciousness, diet is associated with a special way of nutrition (elimination of many foods), which—used for several days or weeks—will lead to weight loss body, after which one can eat as before . 
That is why it is better to talk to the patient not using the word ‘diet’, just to make a permanent change in eating habits. The energy content of the diet should be determined individually. The simplest is to apply a formula to determine the total energy expenditure. Basic energy expenditure (basal metabolic rate (BMR) × coefficient physical activity) BMR: For men = 11.6 × body weight (kg) + 879 kcal; For women = 8.7 × body weight (kg) + 826 kcal. The physical activity factor: For people who lead a sedentary lifestyle—1.3; Moderately active—1.5; Regularly physically active—1.7 . From the calculated energy expenditure determining the energy content of the diet, it is necessary to subtract about 500–600 kcal to obtain approximately 0.5 kg weight loss per week or 1000 kcal for a loss of approx. 1 kg per week. Reassessment of the energy content of the diet should be made in accordance with the above data each time the body weight stops reducing . A diet should be varied and contain all the necessary food ingredients. In the selection of recommended foods, individual patients’ preferences should be considered. The proportion of food macronutrients recommended by the WHO is as follows: about 20% of the energy content of the diet should be proteins, about 25% fats, and about 55% carbohydrates . No more than 10% of energy may come from fats containing saturated fatty acids (SFA). At least 6% of this energy should provide polyunsaturated fatty acids (PUFA), and the rest should provide monounsaturated fatty acids with cis configuration (MUFA). It should be noted that monounsaturated fatty acids with the configuration trans (trans fatty acids—TFA) should not exceed 1% of the incoming energy from fats . The main sources of SFA in the diet are butter and lard, beef tallow, as well as oils: coconut and palm, and also cocoa, nut, and vegetable butter (these kinds of butter are the main ingredients of chocolate) . The main food sources of MUFA are olive oil and other vegetable oils . TFAs are mainly delivered from fast food, cakes, and cookies that contain industrially hydrogenated vegetable oils included in the composition of shortenings, fries, and margarine . Note that in intake, PUFA ω-6 and ω-3 should be maintained at a proper 4:1 ratio. Foods rich in ω-3 fatty acids are herring, tuna, salmon, sardines, mackerel, trout, and oil fish. The main food sources of ω-6 acids (>60%) are oils: soybean, sunflower, safflower, evening primrose, and oils from grape seeds, poppy seeds, borage of medicine, and blackcurrant. Approximately 40–50% of these fatty acids include oil, wheat germ, corn, nuts, walnuts, cottonseed, and sesame . Simple carbohydrates (e.g., glucose, fructose, lactose, xylitol, and sucrose) should provide <10% of energy. Dietary fiber should provide wholegrain bread, other grain products and vegetables, fruits, and plant legumes . Reducing the amount of food alone should not be recommended, but most of all, changing its quality (e.g., consumption of fewer dairy products fat, cooking or roasting meat instead of frying, cooking soups on vegetable stocks, without roux and with yogurt instead sour cream, and not using mayonnaise for salads). The patient should be made aware from the outset that the changes it introduces must be permanent. However, this does not mean that there are any foods that they will not be able to eat until the end of their life. If they eat a high-energy product very rarely, e.g., once a quarter, this will not cause weight gain. 
Regular consumption of 3–5 meals a day (at similar times) should be recommended, together with finishing eating with a feeling of incomplete satiety, not eating between meals (if hunger appears between meals, one can drink a glass of water or eat a vegetable, not a fruit), not eating while watching TV, reading, or using a computer, and coping with stress in ways other than overeating. The distribution of energy when eating five meals: Breakfast—25%; Second breakfast—15%; Lunch—35%; Afternoon tea—10%; Dinner—15%. The distribution of energy when eating three meals: Breakfast—40%; Lunch—40%; Dinner—20%. Popular ‘miracle diets’ are not recommended. Both high-fat and high-protein diets, with significantly higher-than-recommended amounts of cholesterol, promote the development of atherosclerosis. Moreover, they are ketogenic diets, which on the one hand inhibit the feeling of hunger but on the other hand lead to acidification of the body and electrolyte disorders. High-protein diets also contain higher-than-recommended phosphate content, which impairs calcium absorption and, with prolonged use, may lead to the development of osteoporosis. Low-energy and very low-energy fat-free diets cause significant weight loss, which promotes the ‘yo-yo’ effect, and also have a ketogenic effect. Recently published studies indicate that the use of ‘miracle diets’ is a risk factor for the development of emotional eating and eating disorders. Lifestyle-changing therapy for patients who are overweight or obese should include behavioral interventions that improve adherence to dietary recommendations on reduced energy content of meals and promote increased physical activity. Behavioral intervention may include self-monitoring of body weight, meal consumption, and physical activity; clear and precise definition of the goals of therapy; education on obesity, nutrition, and physical activity; individual and group conversations; stimulus control; systematic solving of emerging problems; stress reduction; cognitive behavioral therapy; motivational interviewing; behavioral agreements; psychological counseling; and mobilization of social support. If the patient fails to achieve a 2.5% reduction in body weight in the first month of treatment, behavioral intervention and support should be stepped up, as early body weight reduction is a key long-term indicator of success in losing body weight. The GP should discuss realistic treatment goals with the patient. The goal is to lose about 10% of body weight in 3–6 months, then maintain this reduced weight for several months, and then act to reduce body weight further. The family doctor should also explain to the patient that:
- Losing weight too quickly is not beneficial for health (risk of developing liver steatosis and gallstones) and is associated with risks such as the ‘yo-yo’ effect (loss of lean mass and lowering of basal energy expenditure);
- The use of a very restrictive diet may cause deficiencies of vitamins and microelements;
- Treatment is not a short period of dieting but a permanent change in lifestyle, including eating habits, nutrition, and increasing physical activity, and any unfavorable change in this respect will lead to disease relapse;
- The real success is long-term maintenance of a weight loss of at least 10% of the initial body weight, not the number of kilograms that the patient sheds.
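The per-meal energy distributions listed above can be turned into a simple worked example. The sketch below is illustrative only; the constant and function names are our own, and the percentages are exactly those quoted in this section.

```python
# Per-meal energy split quoted above (percentages of the daily energy target).
MEAL_SPLITS = {
    5: {"breakfast": 25, "second breakfast": 15, "lunch": 35,
        "afternoon tea": 10, "dinner": 15},
    3: {"breakfast": 40, "lunch": 40, "dinner": 20},
}


def meal_plan(diet_kcal: float, meals_per_day: int) -> dict:
    """Distribute a daily energy target over 3 or 5 meals."""
    split = MEAL_SPLITS[meals_per_day]
    return {meal: round(diet_kcal * pct / 100) for meal, pct in split.items()}


# Example: a 1650 kcal reducing diet eaten as five meals.
print(meal_plan(1650, 5))
```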
The family doctor should also analyze the patient’s eating habits and identify those that must be eliminated:
- Eating while watching TV;
- Calming oneself with food;
- Eating foods with the wrong composition;
- Eating in a hurry;
- Eating under the influence of the greatest hunger;
- Eating between meals;
- Irregular eating habits.
The GP should advise the patient to keep a food diary for at least 3 months. In the diary, before eating a meal, the patient records the time of consumption, composition, weight, and caloric value. All fluids consumed should also be recorded, except water and unsweetened coffee and tea. It is also worth recording the patient’s physical activity, as this may reveal problems with insufficient lifestyle changes. Aerobic exercise should be recommended (prescribed) to patients who are overweight and obese as part of the lifestyle intervention. Initially, it may be advisable to recommend a gradually increasing amount and intensity of exercise; ultimately, this should be at least 150 min per week of moderate-intensity exercise divided into 3–5 sessions. In the treatment of excess body weight and the prevention of weight regain in a patient following a weight-reduction program, 60–90 min of moderate daily exercise in leisure time is recommended. A dynamic, aerobic effort involving large muscle groups is recommended. Recommended forms of physical activity for obese adults: brisk walking, cycling, swimming, water exercises, and Nordic walking. Resistance exercise should be recommended (prescribed) to patients undergoing a weight-loss intervention to support the loss of body fat while maintaining lean mass; ultimately, these should be single sets of resistance exercises engaging major muscle groups, performed 2–3 times a week. In addition to aerobic exercise, the patient should do resistance exercises 2–3 times a week, 12–15 repetitions each, at 30–50% of maximum muscle strength. The target training heart rate should be 60–70% of the maximum heart rate (220 minus age) in people without cardiovascular disease, and in people with cardiovascular disease, 40–70% of the heart rate reserve (the highest heart rate achieved during the exercise test minus the resting heart rate) plus the resting heart rate. Absolute contraindications to exercise therapy are decompensated circulatory failure, unstable coronary artery disease, and respiratory failure. In patients with a BMI > 40, physical activity should be recommended carefully, under medical and rehabilitation supervision. All overweight or obese patients, apart from physical exercise, should be encouraged to spend their free time actively to reduce a sedentary lifestyle. In order to improve adherence to an individual activity plan, the involvement of trained and certified fitness professionals should be considered. All patients who screen positive for depression or anxiety should be referred to a psychologist. Indications for referral to a psychologist dealing with eating disturbances also include the following:
- Emotional eating;
- Low self-esteem;
- Suspected NES;
- Suspected BED;
- Suspected food addiction.
The main recommendation is cognitive behavioral therapy (CBT). It is a combination of behavioral therapy (oriented toward changing behavior) and a cognitive approach, which addresses the patient’s perception and understanding of the world, their thoughts, beliefs, imagination, and goals.
CBT helps the patient to identify and possibly change their own cognitive constructs (concerning, for example, themselves, their life situation, illness, and future) and to shape new behaviors and skills that will be helpful in achieving the assumed goals. The beliefs subjected to analysis and modification primarily relate to issues connected with obesity, its consequences, and the possibility of regulating body weight. Changing behaviors, in turn, concerns those activities that are directly related to weight loss and maintaining the achieved results. CBT in the treatment of obesity should include the following elements:
- Self-monitoring (e.g., keeping a food diary);
- Techniques to control the eating process (e.g., slow chewing);
- Control of stimuli and their reinforcement or reduction (e.g., shopping according to a list);
- Additional cognitive techniques;
- Relaxation techniques.
Another useful treatment of obesity is interpersonal therapy (IPT), which combines elements of cognitive behavioral and psychodynamic approaches (attachment theory). Interpersonal therapy is considered to be particularly effective in treating BED. Many studies also confirm the effectiveness of psychodynamic therapy in the treatment of patients with obesity. This approach primarily focuses on early childhood experiences, unconscious drives, internal conflicts, as well as mental defense mechanisms. It aims to thoroughly analyze the mechanisms of the patient’s mental functioning and to gain insight with the use of subjective tools (e.g., interpretations) and phenomena (e.g., transference and countertransference, free associations, dreams). There is no drug that can cure obesity. Currently available drugs can only support the treatment of obesity through various mechanisms of action. Therefore, pharmacotherapy for overweight and obesity should be used only as an adjunct to lifestyle therapy and not alone. Because obesity is a chronic disease, pharmacotherapy in the treatment of obesity should be used chronically, as long as it is effective and well tolerated. Short-term pharmacotherapy use (3–6 months) does not produce long-term health benefits and cannot be recommended. Short-term pharmacotherapy may be associated with short-term weight loss followed by the ‘yo-yo’ effect and negative health effects. The choice of pharmacotherapy should be individual because of the heterogeneity of responses to obesity interventions, including medication. The current standard for selecting pharmacotherapy takes into account physician and patient preferences, drug interactions, comorbidities, efficacy, and the risk of potential adverse events. However, new data support the concept that the primary cause of obesity development and the drug’s mechanism of action should be the first criterion for choosing a drug. This approach has already been included in the guidelines of seven Polish Scientific Associations and the Canadian guidelines. There are currently four drugs registered in the European Union that help reduce body weight: orlistat, a combination of naltrexone hydrochloride and bupropion hydrochloride, and the long-acting GLP-1 analogs (liraglutide and semaglutide). Pharmacological treatment is indicated in patients with obesity (BMI ≥ 30 kg/m²) or overweight (BMI ≥ 27 kg/m²) with ≥1 complication of obesity, in whom non-pharmacological treatment has failed to achieve the therapeutic goal.
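The indication criteria quoted above can be expressed as a simple check. The sketch below is illustrative only; it encodes just the BMI thresholds and the requirement that non-pharmacological treatment has failed, and it is not a substitute for clinical judgement.

```python
def pharmacotherapy_indicated(bmi: float, has_obesity_complication: bool,
                              lifestyle_treatment_failed: bool) -> bool:
    """Indication criteria quoted above for anti-obesity pharmacotherapy.

    Indicated when non-pharmacological treatment has failed and either
    BMI >= 30 kg/m2, or BMI >= 27 kg/m2 with at least one obesity complication.
    """
    if not lifestyle_treatment_failed:
        return False
    return bmi >= 30 or (bmi >= 27 and has_obesity_complication)


# Example: BMI 28.4 with hypertension after an unsuccessful lifestyle intervention.
print(pharmacotherapy_indicated(28.4, has_obesity_complication=True,
                                lifestyle_treatment_failed=True))  # True
```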
Pharmacotherapy can also be used at the stage of maintaining the effects achieved over time with non-pharmacological treatment and after surgical treatment of obesity. If, after 3 months of using pharmacotherapy, weight loss is less than 5% in patients without a diagnosis of type 2 diabetes and less than 3% in people diagnosed with this disease (counting weight loss from the start of drug application), its continuation is unjustified. However, it should be stressed that if pharmacotherapy has no effect, one should not wait until 3 months have passed but should discuss with the patient whether the recommended diet and physical activity are being implemented. In addition, psychological problems should be analyzed, and the use of the prescribed pharmacotherapy checked. Orlistat (tetrahydrolipstatin, a derivative of lipostatin produced by Streptomyces toxytricini ) is used orally at a dose of 120 mg three times a day before main meals. In randomized trials, taking orlistat for one year resulted in a weight loss of ~3 kg more than in the placebo group. This drug inhibits the activity of lipases in the gastrointestinal tract (gastric, pancreatic, and intestinal) and prevents the digestion and absorption of some of the fats taken with food. It does not affect the feeling of satiety, hunger, or appetite. It is absorbed from the gastrointestinal tract in trace amounts (1% of the dose), and its metabolites are inactive; therefore, it has no systemic effect. The use of orlistat is justified only in people who prefer fatty foods and have problems with modifying eating habits and who are aware of the drug’s mechanism of action and possible side effects. Consumption of food that contains too much fat results in increased frequency of bowel movements, loose and liquid stools, fatty stools, an urgency to defecate, fecal incontinence, bloating, and abdominal pain. The patient should be warned that these are the effects of nutritional errors, and reducing the consumption of fats will eliminate their occurrence. Patients using lipophilic drugs should wait ≥2 h between taking them and taking orlistat. Contraindications are hypersensitivity to the drug, pregnancy and lactation, cholestasis, and chronic malabsorption syndromes. Indications for the use of orlistat: Obesity (BMI ≥ 30 kg/m²); Overweight (BMI ≥ 27 kg/m²) with obesity complications, such as hypertension, lipid disturbances, ischemic heart disease, myocardial infarction, type 2 diabetes, sleep apnea, or PCOS. Contraindications to the use of orlistat: Chronic malabsorption syndrome; Cholestasis; Pregnancy; Breast-feeding; Hypersensitivity to orlistat. A combined preparation contains two active substances, naltrexone hydrochloride and bupropion hydrochloride, in one tablet. The prolonged-release tablet contains 7.2 mg of naltrexone and 78 mg of bupropion (equivalent to 8 mg of naltrexone hydrochloride and 90 mg of bupropion hydrochloride). Treatment begins with one tablet in the morning for the first week; in the second week, one tablet is taken in the morning and one in the evening; in the third week, two tablets in the morning and one in the evening; and in the fourth week, the target dose is introduced: two tablets in the morning and two in the evening (the daily target dose is 28.8 mg of naltrexone and 312 mg of bupropion, equivalent to 32 mg of naltrexone hydrochloride and 360 mg of bupropion hydrochloride). If, after 16 weeks of using the preparation, the patient’s body weight has not decreased by ≥5% of the initial value, the drug should be discontinued.
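The four-week dose-escalation scheme described above can be tabulated with a short, illustrative sketch; it uses only the per-tablet contents (7.2 mg naltrexone / 78 mg bupropion) given in the text, and the names are our own.

```python
# Four-week dose-escalation scheme for the naltrexone/bupropion combination, as
# described above; each tablet contains 7.2 mg naltrexone and 78 mg bupropion.
TABLET_NALTREXONE_MG = 7.2
TABLET_BUPROPION_MG = 78

TITRATION = {  # week -> (tablets in the morning, tablets in the evening)
    1: (1, 0),
    2: (1, 1),
    3: (2, 1),
    4: (2, 2),  # target dose, continued thereafter
}

for week, (morning, evening) in TITRATION.items():
    tablets = morning + evening
    print(f"Week {week}: {morning} tablet(s) AM + {evening} tablet(s) PM = "
          f"{tablets * TABLET_NALTREXONE_MG:.1f} mg naltrexone / "
          f"{tablets * TABLET_BUPROPION_MG} mg bupropion daily")
```

At the target dose (week 4 onward), the printed totals reproduce the 28.8 mg naltrexone / 312 mg bupropion daily dose stated above.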
Naltrexone and bupropion act on the same regions of the central nervous system (the arcuate nucleus of the hypothalamus and the mesolimbic dopaminergic reward system), and the combination has a hyperadditive effect on the regulation of food intake. This allows them to be used in lower doses, which reduces the risk of side effects and promotes better tolerance of treatment. Bupropion, a member of the β-ketoamphetamine class, is a dopamine and norepinephrine reuptake inhibitor (NDRI) and a non-competitive nicotinic receptor antagonist. It is used alone to treat depression, seasonal affective disorder, and nicotine addiction. Naltrexone, in turn, is an antagonist of the µ-opioid receptor, to a lesser extent of the κ-receptor, and to an even lesser extent of the δ-receptor. At a dose of 50 mg, it is used in the treatment of non-opioid addictions, primarily alcohol dependence (supporting abstinence by reducing the need to drink). In the arcuate nucleus of the hypothalamus, bupropion stimulates the activity of POMC-secreting neurons and, as a β-ketoamphetamine, the release of cocaine- and amphetamine-regulated transcript (CART), which in turn stimulates the release of α-melanocortin (α-MSH); α-MSH binds to melanocortin type 4 receptors (MC4-R), stimulating the feeling of satiety. The feedback that inhibits the release of POMC is the increased release of β-endorphin by these neurons; naltrexone, by blocking the µ-opioid receptors, inhibits this feedback loop and, as a result, prolongs the feeling of satiety. Naltrexone and bupropion also reduce food intake stimulated by appetite (the hedonistic search for a specific food not to satisfy hunger but to feel pleasure from its consumption), which is governed by the reward system along with its main neurotransmitters: dopamine, norepinephrine, and endogenous opioids. Bupropion inhibits the reuptake of dopamine and noradrenaline (inhibition of the drive to seek food), and naltrexone blocks the opioid receptors stimulated by endogenous opioids (reducing the ‘liking’ of tasty food). In clinical trials, the most common adverse drug reactions to this combination product were nausea, vomiting, headache, dizziness, insomnia, and dry mouth. They usually disappear spontaneously within the first 4 weeks of treatment. The indication for the drug is as an adjunct to a reduced-energy diet and increased physical activity for weight loss in adult patients (≥18 years old) with a baseline BMI of either: BMI ≥ 30 kg/m² (obesity); BMI 27 kg/m² to <30 kg/m² (overweight) if the patient has one or more complications of obesity (e.g., type 2 diabetes, dyslipidemia, compensated hypertension). Contraindications: Hypersensitivity to any active substance or excipient; Uncontrolled hypertension; Current epilepsy or a history of seizures; A tumor of the central nervous system; The period immediately following abrupt withdrawal from alcohol or benzodiazepines in a dependent person; History of bipolar disorder; Taking bupropion or naltrexone for an indication other than weight loss; Current or past bulimia nervosa or anorexia nervosa; Dependence on chronic use of opioids or opiates (e.g., methadone) and the period shortly after their discontinuation; Taking monoamine oxidase inhibitors (MAOI); Severe hepatic impairment; End-stage renal failure or severe renal impairment; Pregnancy and breastfeeding.
The use of this drug as the first choice is recommended in patients diagnosed with emotional eating, BED, NES, food addiction, or depression, and in patients quitting smoking. Liraglutide is a long-acting GLP-1 analog that is used s.c. once daily at the target dose of 3 mg/d. Treatment of obesity is started with a dose of 0.6 mg/d and increased weekly by 0.6 mg/d until a dose of 3 mg/d is reached. If the drug is still poorly tolerated 2 weeks after a dose increase, discontinuation of the drug should be considered. Treatment should be discontinued if, after 12 weeks of use at a dose of 3.0 mg/d, the patient has not lost ≥5% of the initial body weight. Liraglutide, like natural GLP-1, acts on target cells, producing effects analogous to those of the natural hormone. The main mechanism leading to weight loss depends on the direct activation of GLP-1 receptors located in the central nervous system and the downstream activation of GLP-1 afferents, including neurons of the autonomic nervous system. GLP-1 receptors are found in many structures of the central nervous system, including the solitary tract nuclei and the POMC/CART anorexigenic neurons of the hypothalamus, and their activation is responsible for the feeling of satiety. The concomitant inhibition of hunger is the result of indirect inhibition of neurotransmission in NPY- and AgRP-expressing neurons through γ-aminobutyric acid (GABA)-dependent signaling. An additional mechanism of increased satiety is the slowing of gastric emptying. Experimental studies in rats also indicate that reduced food intake may be related to nausea, which is induced by the effect of liraglutide on GLP-1 receptors in the solitary tract nucleus. Liraglutide also acts in many peripheral tissues. The incretin effect exerted by GLP-1 agonists was the first to be discovered, including GLP-1-stimulated, glucose-dependent insulin secretion from pancreatic β-cells, which is used in the treatment of type 2 diabetes, where the target dose is 1.8 mg/d (liraglutide has been approved under a different trade name for the treatment of diabetes). Based on a trial performed in patients with type 2 diabetes treated with liraglutide at a dose of 1.8 mg/d, this treatment did not increase the risk of cardiovascular complications. However, no prospective clinical trials have been conducted in patients with obesity but without type 2 diabetes treated with liraglutide at a dose of 3.0 mg/d. The improvement of cardiometabolic parameters in people without type 2 diabetes primarily depends on the reduction of body weight. Indications are similar to those for other drugs supporting the treatment of obesity; the use of liraglutide 3 mg in the treatment of overweight and obesity should be considered as an adjunct to lifestyle modification in patients: (1) With a BMI ≥ 30 kg/m² (obesity); (2) With a BMI of 27–30 kg/m² (overweight) if accompanied by ≥1 complication related to excessive body weight (including prediabetes or type 2 diabetes, hypertension, lipid disorders, or obstructive sleep apnea). The effectiveness of treatment is assessed after 12 weeks of using liraglutide at the full dose of 3.0 mg once daily s.c.; it may be continued if body weight has decreased by ≥5%. The most common side effects are nausea, vomiting, diarrhea, and constipation, which are usually temporary.
This drug should be the first choice in the treatment of obesity in patients with prediabetes or type 2 diabetes, as well as in those with clinical features of insulin resistance, after the exclusion of emotional eating, BED, food addiction, and NES. Studies conducted using functional magnetic resonance imaging confirmed the lack of influence of GLP-1 analogs on the reward system and, consequently, their lower effectiveness in people with emotional eating, which is driven by the reward system. Contraindications to the use of liraglutide, apart from hypersensitivity to the active substance or excipients, include a family history of medullary thyroid cancer, a history of pancreatitis, and pregnancy. Semaglutide is a very long-acting GLP-1 analog that is used s.c. once a week at a target dose of 2.4 mg/week. Treatment of obesity starts with a dose of 0.25 mg/week. After 4 weeks, it is increased to 0.5 mg/week, after another 4 weeks to 1 mg/week, after another 4 weeks to 1.7 mg/week, and after a further 4 weeks to the target dose of 2.4 mg/week. If the drug is poorly tolerated 2 weeks after increasing the dose, discontinuation should be considered. If severe gastrointestinal symptoms occur, consideration should be given to delaying dose escalation or reverting to the previous dose until symptoms have improved. Due to the long half-life, the drug should be discontinued 2 months before a planned pregnancy. The mechanism of action of semaglutide is similar to that of liraglutide. Like liraglutide, semaglutide was originally registered for the treatment of type 2 diabetes, where the target dose is 1 mg/week (for the treatment of diabetes, semaglutide has been registered under a different trade name). To date, no prospective clinical trials have been conducted to evaluate the effect of semaglutide on cardiovascular risk in non-diabetic subjects. The improvement of cardiometabolic parameters in people without type 2 diabetes primarily depends on the reduction of body weight. Indications are similar to those for other drugs supporting the treatment of obesity; the use of semaglutide 2.4 mg in the treatment of overweight and obesity should be considered as an adjunct to lifestyle modification in patients: (1) With a BMI ≥ 30 kg/m² (obesity); (2) With a BMI of 27–30 kg/m² (overweight) if accompanied by ≥1 complication related to excessive body weight (including prediabetes or type 2 diabetes, hypertension, lipid disorders, obstructive sleep apnea, or cardiovascular disease). The most common adverse drug reactions are nausea, vomiting, diarrhea, constipation, and abdominal pain, which are usually temporary. Due to the rapid initial weight loss, there is also a risk of developing gallstones. Contraindications to the use of semaglutide, apart from hypersensitivity to the active substance or excipients, include a family history of medullary thyroid cancer, a history of pancreatitis, and pregnancy. If patients have psychogenic eating disorders, pharmacotherapy with liraglutide and semaglutide may be less effective, and it is suggested that a combination of naltrexone and bupropion should be considered first. Some authors also propose considering polytherapy with liraglutide and naltrexone with bupropion. The safety of the combination of naltrexone/bupropion and a long-acting GLP-1 analog was confirmed in the recently published post hoc analysis of the LIGHT study.
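The titration schemes for the two GLP-1 analogs described above (liraglutide increased by 0.6 mg/day each week up to 3.0 mg/day; semaglutide increased every 4 weeks through 0.25, 0.5, 1.0, 1.7, and 2.4 mg/week) can be sketched as follows. The code is illustrative only; in practice, escalation is slowed or paused according to tolerance, as noted above, and the function names are our own.

```python
# Dose-escalation schemes for the two long-acting GLP-1 analogs described above.

def liraglutide_dose(week: int) -> float:
    """Daily liraglutide dose (mg) in a given treatment week (1-based)."""
    return min(0.6 * week, 3.0)


def semaglutide_dose(week: int) -> float:
    """Weekly semaglutide dose (mg) in a given treatment week (1-based)."""
    steps = [0.25, 0.5, 1.0, 1.7, 2.4]
    return steps[min((week - 1) // 4, len(steps) - 1)]


for week in range(1, 21, 4):
    print(f"Week {week:2d}: liraglutide {liraglutide_dose(week):.1f} mg/day, "
          f"semaglutide {semaglutide_dose(week):.2f} mg/week")
```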
It should be noted that a systematic review and meta-analysis of randomized placebo-controlled trials showed that the use of GLP-1 analogs is associated with an increased risk of developing gallstones or cholelithiasis, and this risk increases with higher doses, longer duration of use, and use for weight loss. In addition, the analysis of cases reported in the European Pharmacovigilance Database showed that the use of GLP-1 analogs is associated with a higher risk of developing thyroid cancer. In a recently published study, which analyzed a total of 2526 cases of patients with thyroid cancer compared with 45,184 people from the control group, it was shown that the use of GLP-1 analogs for 1–3 years was associated with an increased risk of all thyroid cancers. Attention should also be paid to the increased risk of tachycardia and arrhythmia during treatment with semaglutide. According to the ESE guidelines from 2020, it is not recommended to use metformin solely for weight reduction (no registration in this indication), and the indications for the use of this drug are prediabetes and type 2 diabetes. Other drugs: there is insufficient evidence to support the use of herbal medicines, dietary supplements, probiotics, or homeopathy in the treatment of obesity. The results of single studies indicate that the use of fiber preparations containing soluble and insoluble fibers may increase the effects of non-pharmacological treatment of obesity.
5.8.1. Requirements for Reference Centers
Bariatric surgeries should be performed in centers specializing in this type of surgery, able to choose the optimal (medical indications and patient’s preferences) surgical method together with the patient, and having substantive preparation and appropriate equipment. This concerns not only the equipment of the operating room (e.g., operating table and laparoscopic tools) but also the equipment of the ward (hospital beds with a load capacity of 250–300 kg, couches, wheelchairs, chairs, and bariatric platforms) and sanitary facilities (shower cabins adapted for people with obesity, equipped with appropriate handles and handrails).
5.8.2. Qualification
Proper qualification for surgical treatment of obesity is one of the key factors affecting its results. It is recommended that non-surgical treatment should be attempted before considering surgery. The primary criterion for qualifying for bariatric surgery is the patient’s BMI. Until recently, the second crucial criterion assessed during qualification was the patient’s age. Currently, there is no age limit for patients undergoing bariatric surgery; however, careful selection of older patients is recommended, in whom assessment of frailty is critical, since frailty, more than age, is associated with a higher rate of postoperative complications. It is worth noting that before surgery, it is recommended to lose 5–10% of body weight (among other benefits, this has a positive effect on the results of bariatric treatment and reduces the risk of perioperative complications). Eligibility criteria based on BMI: BMI > 40.0 kg/m²; BMI 35.0–39.9 kg/m² in a patient with obesity complications (e.g., type 2 diabetes, hypertension, severe joint diseases, dyslipidemia, a severe form of OSA)—the latest guidelines recommend surgery in patients with BMI in this range regardless of obesity complications; BMI 30.0–35.0 kg/m² and uncontrolled type 2 diabetes despite appropriate pharmacological treatment.
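The BMI-based qualification pathways listed above can be summarized in a short illustrative check; the contraindications and multidisciplinary assessment discussed further below are deliberately not captured here, and the function name is our own.

```python
def bariatric_surgery_bmi_criteria_met(bmi: float,
                                       has_obesity_complication: bool = False,
                                       uncontrolled_t2d: bool = False) -> bool:
    """BMI-based eligibility pathways quoted above (contraindications not included).

    - BMI > 40.0 kg/m2;
    - BMI 35.0-39.9 kg/m2 with an obesity complication (per the latest guidelines,
      this range may qualify even without complications);
    - BMI 30.0-35.0 kg/m2 with type 2 diabetes uncontrolled despite pharmacotherapy.
    """
    if bmi > 40.0:
        return True
    if 35.0 <= bmi < 40.0 and has_obesity_complication:
        return True
    if 30.0 <= bmi <= 35.0 and uncontrolled_t2d:
        return True
    return False


# Example: BMI 36.2 with hypertension meets the second pathway.
print(bariatric_surgery_bmi_criteria_met(36.2, has_obesity_complication=True))  # True
```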
Contraindications to bariatric procedures: Mental disorders—personality disorders, severe depression; Alcoholism; Drug abuse; Eating disorders; No possibility of proper, long-term postoperative care; Poor long-term prognosis due to life-threatening diseases. The final qualification of the patient for the operation must be multidisciplinary. It is a decision of a team of specialists experienced in the treatment of obesity: a surgeon, an internist, an anesthesiologist, a psychologist or a psychiatrist, a dietitian, a physiotherapist, and, if necessary, a cardiologist, a pulmonologist, a gastroenterologist, and a neurologist. The optimal time to prepare the patient for surgery should not be shorter than three months, but also not longer than 6–12 months. The preoperative assessment of patients scheduled for bariatric surgery should be much broader than for any other major abdominal surgery. The success of surgical treatment of obesity requires a good understanding by the patient of the entire treatment process, not only the surgery itself. Therefore, the patient must be provided with information on the benefits and consequences and the risks associated with the operation. A key, unfortunately often overlooked, aspect when qualifying a patient for surgery is the psychological consultation (a comprehensive assessment sometimes requires several meetings with a psychologist); behavioral, nutritional, family, and personality assessment should be an integral part of the patient’s preoperative evaluation. Working with a psychologist is one of the elements aimed at increasing the safety and effectiveness of surgical treatment by identifying specific areas to create an individually tailored treatment plan.
5.8.3. Types of Operation
There are at least a dozen different types of operations to choose from, changing or modifying to a greater or lesser extent not only the anatomy but also the physiology of the digestive system, and characterized by a different number and type of long-term complications. The common feature of all surgical methods is the preferred access method—2D or 3D laparoscopy. Detailed qualification criteria for a given type of bariatric operation go beyond the thematic framework of this study. It should be emphasized that bariatric treatment is personalized, and each element (mainly the surgical procedure) should be individually selected for the patient.
5.8.4. Post-Treatment Monitoring and Intervention
In-Hospital
Perioperative care for patients undergoing bariatric treatment should be organized according to the Enhanced Recovery After Bariatric Surgery principles. Patients with morbid obesity have an increased risk of partial atelectasis in the distal alveoli. In the postoperative period, in the recovery room, it is recommended to administer supplemental oxygen. For patients diagnosed with obstructive sleep apnea, it is necessary to use CPAP, i.e., breathing support that prevents the collapse of the alveoli. In the postoperative period, appropriate respiratory rehabilitation and breathing exercises are an important aspect. Each patient should be mobilized on the first day after returning from the post-operative observation unit. Patients can drink fluids after returning from the recovery room. On the first postoperative day, the patient takes fluids orally (no daily volume limit) and oral nutritional support (ONS). Patients are encouraged to drink fluids while walking around the hospital ward (simultaneous activation). In the following days, the diet is gradually expanded.
Each patient receives proton pump inhibitors (PPI) twice a day, antithrombotic prophylaxis, and non-opioid analgesia until discharge. Discharge occurs on either postoperative day 1 or 2 upon meeting specific discharge criteria: the patient tolerates an oral diet and drinks at least 1000 mL of fluids per day; does not require intravenous fluids; postoperative pain is manageable with oral medication; the level of physical activity is similar to that before the operation; after discharge, the patient will remain under the care of third parties and, if necessary, contact with the treatment center is ensured; there were no complications that required hospitalization. Upon discharge, patients receive a follow-up visit plan for 1 year, a baseline dietary plan, and a prescription for PPI, antithrombotic prophylaxis, vitamin supplementation, and ursodeoxycholic acid (gallstone prevention).
Perioperative Monitoring for up to 30 Days
The wound should be kept clean, washed every day with a disinfectant, and a dressing applied. Stitches should be removed 7–10 days after the surgery; for patients with diabetes, this time may be longer. This can be performed by a primary care physician or in a surgical outpatient clinic. Patients with fever, vomiting, wound discharge, abdominal pain requiring opioids, or dehydration symptoms should be referred to a surgical unit. A follow-up visit with a surgeon is necessary during the first 30 days. Every patient is prescribed low-molecular-weight heparin injections. There are no clear standards for how long antithrombotic prophylaxis should be administered; however, it should not be shorter than 7 days after discharge. PPI should be taken twice a day for 30 days. A longer period should be considered in patients with symptoms of gastroesophageal reflux after surgery. Rapid weight loss is associated with an increased risk of the development of gallstones. Recently, several randomized and observational studies have shown that the postoperative supply of ursodeoxycholic acid significantly reduces the risk of the development of biliary stones. The debate regarding the duration of such prophylaxis and the dose is still ongoing; however, at the moment, it is advisable to consider the use of 500 mg of ursodeoxycholic acid in a divided dose for 6 months. From 2–4 weeks after surgery, the patient can take solid foods, depending on tolerance. Quick and effective activation of the gastrointestinal tract after bariatric surgery would not be possible without proper patient education in the preoperative period. In the perioperative and postoperative periods, the patient should also be consulted by a dietitian, who will coordinate the further nutrition of the patient. Due to the initial difficulty in taking food, patients need an adequate protein supply of about 1–1.5 g of protein per kilogram of ideal body weight. Initially, the most important aspect of nutrition for patients after surgery is an adequate amount of fluids to avoid dehydration. An operation with a large malabsorptive effect may require an even greater supply of protein. Each patient, during the first 30 days, should have a follow-up visit with a dietitian to assess their diet plan. Patients should resume physical activity straight after surgery. For 6 weeks, patients should not perform exercises involving the abdomen. Preferably, physical activity should be planned and supervised by a physiotherapist.
Monitoring during the First Year after Surgery
Patients after bariatric surgery require follow-up visits to determine weight loss, remission of obesity complications, complications associated with surgery, nutritional status, and potential qualification for revisional surgery. The success of weight loss should be assessed as the percentage of excess weight loss (%EWL). More than 50% EWL is considered a success. Obesity complications, such as type 2 diabetes and hypertension, need to be closely monitored by primary care physicians or specialists to reduce the dose of medication at the proper time. Remission should be assessed according to the treatment standards for the specific diseases. Female patients of childbearing age should be advised to use medically effective contraception for 12 months following the surgery; pregnancy during the period of weight loss is not recommended. Malnutrition after bariatric surgery is common. It partly depends on the type of surgery performed (it is more common after RYGB than SG) and also on the initial nutritional status of the patient. People with obesity are usually deficient in vitamins, particularly vitamins D and B12, thiamine, and folic acid, as well as calcium, iron, zinc, and copper. In recent years, easily absorbable products for bariatric patients, which contain all the necessary ingredients, have appeared on the market. In addition, it is recommended to periodically check the serum levels of micro- and macro-elements. Control tests, including kidney and liver function tests, complete blood count (CBC), and serum ferritin, folic acid, vitamin B12, vitamin D, and calcium measurements, should be performed 3, 6, and 12 months after the procedure and then at least once a year. If this has not been performed before surgery, intact parathyroid hormone levels should be checked. Serum levels of vitamin A, vitamin E, vitamin K1, and PIVKA-II (a protein induced by vitamin K absence or antagonism) should be measured at regular intervals after malabsorptive procedures, such as BPD/DS, or when symptoms of deficiency occur. It is recommended to monitor serum zinc, copper, and selenium levels after sleeve gastrectomy (SG), Roux-en-Y gastric bypass (RYGB), or BPD/DS. Routine monitoring of magnesium levels is not required. The patient should be monitored by a ‘bariatric team’, which comprises a surgeon, a psychologist, a dietitian, and others. The optimal time for a psychologist consultation is between 6 and 12 months after surgery; at this time, the pace of weight loss slows down, and psychological support enhances the patient’s motivation. A follow-up visit with a surgeon should be scheduled 1 year after the procedure to assess weight, comorbidities, and additional blood tests. Each patient requires a mandatory gastroscopy 1 year after surgery.
Long-Term Follow-Up
The gastrointestinal tract in bariatric patients is permanently changed; thus, they require monitoring indefinitely to detect any signs of malnutrition, macro- and micro-element deficiency, weight regain, or complications, such as gastroesophageal reflux and dumping syndrome. Patients with weight regain or the onset of surgery-related complications should be referred to a bariatric surgeon to determine the need for revisional surgery. Currently, it is possible to help such patients using pharmacotherapy. Pharmacotherapy should be used in patients with insufficient weight loss and as a first-line treatment in patients with weight regain.
Currently available medications are glucagon-like peptide-1 receptor agonists (GLP-1), including liraglutide (3 mg once a day s.c.) and semaglutide (1.5 mg once a week, s.c.), and the combination preparation containing naltrexone and bupropion (16 mg/180 mg twice a day, orally). The choice of pharmacotherapy should be individualized. Some patients, after massive weight loss, may require plastic surgery to remove excess skin. This operation should be considered at least 12 months after the surgery, with at least a 6-month period of stable weight.
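As a worked illustration of the %EWL criterion used above for the first-year assessment, the sketch below computes %EWL for a hypothetical patient. The definition of the reference (‘ideal’) body weight is not specified in the text; the weight corresponding to a BMI of 25 kg/m² is assumed here purely for illustration, and the names are our own.

```python
def percent_excess_weight_loss(initial_kg: float, current_kg: float,
                               height_m: float, reference_bmi: float = 25.0) -> float:
    """%EWL = (initial weight - current weight) / (initial weight - reference weight) * 100.

    The reference ('ideal') weight is assumed here to be the weight corresponding
    to a BMI of 25 kg/m2; the text above does not specify the convention used.
    """
    reference_kg = reference_bmi * height_m ** 2
    return 100 * (initial_kg - current_kg) / (initial_kg - reference_kg)


# Example: a 1.70 m patient weighing 120 kg before surgery and 95 kg one year later.
ewl = percent_excess_weight_loss(120, 95, 1.70)
print(f"%EWL = {ewl:.0f}%  ->  {'success' if ewl > 50 else 'below the 50% threshold'}")
```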
Patients with weight regain or the onset of surgery-related complications should be referred to a bariatric surgeon to determine the need for revisional surgery. Currently, it is also possible to help patients using pharmacotherapy. Pharmacotherapy should be used in patients with a lack of weight loss and as a first-line treatment in patients with weight regain. Currently available medications are glucagon-like peptide-1 (GLP-1) receptor agonists, including liraglutide (3 mg once a day, s.c.) and semaglutide (1.5 mg once a week, s.c.), and a combination preparation containing naltrexone and bupropion (16 mg/180 mg twice a day, orally). The choice of pharmacotherapy should be individualized. Some patients, after massive weight loss, may require plastic surgery to remove excess skin. This operation should be considered at least 12 months after the surgery, with at least a 6-month period of stable weight.
Long-Term Monitoring
Beneficial prognostic factors during treatment that indicate long-term maintenance of the effects include a greater frequency of self-monitoring of energy intake and body weight, consistency between dietary choices and weight reduction goals, lower intensity of negative mood, lower intensity of hunger and emotional eating, and regular physical activity. Therefore, it is very important to focus work with the patient on these factors during therapy. The patient should be motivated to self-monitor and make appropriate food choices, which the patient should learn with the support of a dietician and physician. A very important element regarding food choices is directing the patient's thinking in such a way that there are no dishes and products that are completely forbidden, but there are products that can be eaten only in small amounts. The patient must not think that eating a certain product is a sin or that a certain product is 'forbidden fruit', because this may become a source of obsessive thinking about it. Because of such thinking, patients often see the end of the active phase of treatment, i.e., the achievement of the therapeutic goal, as the point beyond which they will again be able to eat these forbidden products. Other very important elements that positively affect long-term weight loss maintenance are regular meals, eating breakfast, and choosing food with a low content of fats and simple sugars. Long-term weight maintenance is also associated with the implementation of regular physical activity accepted by the patient, and its duration should be longer than in the phase of active treatment, as lower weight is related to decreased energy expenditure during physical activity. There is also a need to effectively treat negative emotions, intensified hunger, and emotional eating. Both pharmacotherapy and psychotherapy should be used and, if necessary, both of these therapeutic methods should also be applied in the phase of maintaining the achieved treatment effects. Numerous data indicate that the psychological assessment of the patient's personality is prognostic and helps to select those patients who may have problems with maintaining the therapeutic effects and who should be subject to greater supervision at this stage. A stable personality with a higher level of self-control and greater emotional maturity, as well as personality traits such as creativity, autonomy, and self-sufficiency, and an internal locus of control with a sense of self-efficacy and better self-esteem, are favorable prognostic factors.
In contrast, dysfunctions in social interactions, higher levels of anxiety, and avoidance of monotony are unfavorable prognostic factors. Maintaining the achieved effects is easier in patients with a more stable living situation, fewer stressful life events, and social support. It should be remembered that patients with more problems related to eating behaviors and life situations, worse functioning, and less social support require more intensive supervision and support and the search for individual solutions in the maintenance phase. It is also very important to teach the patient that if the disease recurs, they should immediately contact the doctor and apply the proposed treatment methods. Numerous studies have shown that prolonged therapeutic interventions and continuous professional support in the maintenance phase significantly improve long-term treatment outcomes. The importance of primary care in the management of an overweight or obese patient cannot be overestimated. Family doctors, who take care of their patients on a continuous basis and comprehensively assess their health problems, have a special opportunity to properly prevent, diagnose, and treat overweight and obesity. The accompanying table summarizes the tasks of a family doctor in this area. Because primary health care is subject to numerous and varied duties and tasks, the treatment of patients with obesity in the practice of a family doctor may encounter various barriers and difficulties. The basic barrier on the doctor's side is a lack of knowledge about obesity. Some doctors treat obesity only as a risk factor and not as a complex disease that is the starting point for many organ complications and other diseases. Moreover, doctors have insufficient knowledge about the causes of obesity, its diagnosis, and its treatment. Due to insufficient knowledge, but also a misconception, some doctors believe that obesity is the result of a lack of discipline in nutrition and that it is therefore not possible to obtain positive treatment effects. The key barrier for most family doctors is certainly the time that the doctor has for the patient. On average, 10 min are allocated for a consultation in Polish primary health care. In this time, it is necessary to collect the history, conduct a physical examination, make recommendations, and fill in the patient's record. The amount of work and the short time for consultations discourage doctors from making efforts in the prevention and treatment of obesity, in the context of the belief that such activities are of little effectiveness. In addition, there is an inadequate organization of work, in which the role of nurses is omitted, especially at the stage of overweight and obesity diagnosis in relation to anthropometric measurements, but also the role of nurses or other professional personnel in education in the area of obesity prevention and treatment. In the effective treatment of a patient with obesity, the patient's attitude towards this disease and trust in the attending physician are very important. Recently, the phenomenon of body shaming has been observed in the mass media, on television, and on the Internet. As a result, patients with obesity accept this condition and deny the need for treatment. In addition, a large proportion of patients, as well as some doctors, do not believe in the success and effectiveness of obesity treatment. This reduces the motivation for obesity treatment.
This is a serious barrier, and we should only treat patients who consent to it and are sufficiently convinced that it makes sense. On the other hand, in patients already under treatment, we may encounter serious difficulties in obtaining good results due to family habits and undiagnosed emotional eating or eating disorders. In everyday clinical practice, we observe the occurrence of obesity from generation to generation. It should be emphasized that it is extremely rare for this to have a genetic basis. Most often, it is the result of established eating habits and family dysfunction leading to the development of eating disorders. Difficulties in changing eating habits may also result from the nature of the work performed (e.g., working as a truck driver or shift work). Excessive eating is also a consequence of the way societies function in the modern world, dominated by haste, stress, competition, irregular and excessive working hours, and a lack of rest or passive rest, which results in negative emotions being dealt with through food. We are also dealing with unpredictable situations, such as a pandemic that limits people's activity and worsens their mental state, which contributes to the growing incidence of obesity. Both individual and environmental factors are difficult obstacles to overcome in the treatment process. There are also economic reasons. Modern drugs that can be used in the treatment of obesity are expensive, and for the time being, few patients can benefit from them. Finally, family physicians also encounter some significant barriers at the system level. Due to the complexity of obesity, its proper management requires a team of professionals, including a dietician, a psychotherapist, and a physiotherapist. Equally important, when a family doctor has exhausted the possibilities available at the level of primary health care, they cannot refer a patient with obesity to a center that will take care of them comprehensively. Therefore, family physicians usually refer patients with obesity to an endocrinologist, surgeon, or other specialists, depending on the complications present. We are able to overcome some of the above-mentioned barriers. First of all, as doctors, we can broaden our knowledge about the diagnostics and treatment of obesity and use the available guidelines on this topic. We are also able to change our attitude and approach to obesity treatment, improve communication with patients, and motivate patients to start treatment and persevere in changing their lifestyles. However, it is a lengthy process, requiring consistency and commitment on the part of the doctor, but also the patient's cooperation. Organizational changes can help us with this. One can use an organizational scheme of conduct, different for patients newly enrolled in the practice and for patients already under our care, which is illustrated in the accompanying figure. The key seems to be the obligation imposed on primary care physicians by the National Health Fund to perform anthropometric measurements in all patients once a year. This allows the diagnosis of excessive body weight, which is the starting point for further action. The second important barrier to overcome is the time of medical consultation. Due to the short time that a family doctor can devote to a patient during one visit, it is beneficial to schedule several short visits in succession, during which the necessary activities related to the diagnosis and treatment of obesity are performed.
A randomized study has shown that even a very short, 30-second medical consultation in a primary care clinic and referral of the patient to participate in a local weight loss program result in greater weight loss assessed after 12 months. Certain opportunities for overcoming the above-mentioned barriers also open up with the introduction of coordinated care in primary health care in relation to cardiological, endocrinological, and pulmonary diseases and diabetes. In coordinated care, family doctors receive dietary and educational advice as well as greater diagnostic possibilities, which should result in better care for patients with obesity.
7.1. Recommendations for General Practitioners (GPs)
Screening for overweight and obesity, including weight measurement and BMI calculation, should be performed in all adult patients reporting to their GP once a year;
The measurements of body weight, height, and waist circumference should be an integral part of the physical examination and should be recorded in the medical history. The measurements should be performed during the following: the patient's first visit to the GP (at the latest during the two consecutive visits); a patient visit due to overweight or obesity; if possible, at each visit whose reason is a complication of obesity, including hypertension, type 2 diabetes, dyslipidemia, coronary artery disease, osteoarthritis, and other comorbidities related to obesity; and at each routine visit, if the doctor suspects the patient is overweight or obese;
In all patients with normal BMI values (18.5–24.9 kg/m2), waist circumference should be measured to assess metabolic risk;
In all patients with BMI < 35 kg/m2, waist circumference should be measured to assess the occurrence of visceral obesity;
In all patients with overweight and obesity, the history should be taken with regard to complications, and diagnostics for these complications should be performed. Such activities should be carried out systematically;
Diagnostics for overweight and obesity should be performed in all patients treated for their complications;
All patients with overweight and obesity should be screened for emotional eating, eating disorders (binge eating syndrome and night eating syndrome), as well as depression and anxiety (HADS);
A patient with obesity should be treated with respect, and his/her illness should not be a source of shame and self-blame;
After making a diagnosis of overweight or obesity, the doctor should explain to the patient the essence of the disease and its consequences and assess his/her readiness to change and the primary cause of the development of obesity;
A physician should use appropriate medical vocabulary in relation to a patient with obesity, show empathy towards him/her and give advice appropriate to his/her situation, as well as implement all possible therapeutic procedures, including pharmacotherapy and psychotherapy and, if indicated, also surgical treatment. The patient must agree to the proposed treatment methods and accept them;
The principle of person-centered care should be the norm in the approach to patients with obesity;
During treatment, a schedule of follow-up visits should be set, and the patient should be informed about what will be checked during them.
If necessary, the methods of implemented treatment should be expanded and the patient supported in the event of difficulties;
Remember that a patient with obesity may be aware of their disease but may not talk about it because they are ashamed, and the doctor must be able to talk about it;
It is unethical not to recognize and not to treat obesity, or to fail to refer the patient to another doctor who will treat it.
7.2. Recommendations for National Authorities
The lack of designated support for the health care of patients with obesity in Poland necessitates the creation of a system based on an obesitologist, a dietician, a psychologist, and a physiotherapist to support general practitioners referring patients with difficulties in diagnosis and management. The already existing shortage of time in primary care precludes taking on additional demanding duties, regardless of the transfer of funds for the creation of positions for other members of the therapeutic team (a dietitian and a psychologist). In order to implement professional treatment of obesity, it is necessary to establish a subspecialty in obesitology, the program of which could be completed by doctors specializing in internal medicine, family medicine, pediatrics, and general surgery. On the basis of academic centers, regional multi-specialty centers with a full range of care, including the possibility of surgical treatment and pre- and post-operative care (third-level referral centers), should be established. Specialist centers should be established at least at the level of voivodeship capital cities to provide a full range of conservative treatment (second level of referral), and selected family doctor practices with at least one obesitology specialist in the team should constitute the first level of referral.
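Since several of the recommendations above hinge on BMI categories, a short reminder of the calculation may be useful; the numbers below are purely illustrative:

\[
\text{BMI} = \frac{\text{body weight (kg)}}{\text{height (m)}^2}
\]

For example, a patient weighing 95 kg with a height of 1.70 m has a BMI of 95 / 1.70² ≈ 32.9 kg/m², i.e., above the normal range of 18.5–24.9 kg/m² cited above and, by the conventional WHO cut-offs, in class I obesity (30.0–34.9 kg/m²).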
Cannabinoid consumption among cancer patients receiving systemic anti-cancer treatment in the Netherlands
543eaff9-cead-4d81-a9ea-e2f0bd2087f8
10097765
Internal Medicine[mh]
As a consequence of the globally increased public interest in the medical use of cannabinoids, these substances receive an extensive amount of attention in the media (Bridgeman and Abazia). Through these media, their alleged beneficial effects for a variety of diseases are widely propagated (Shi et al.). In line with the increased interest in these products, over-the-counter sale of freely accessible cannabis-derived products, such as cannabinoid oils, has increased (McGregor). Additionally, statistics from the Netherlands show an increase in the medical prescription of both concentrated cannabis-derived oils and herbal cannabis (de Hoop). The numerous anecdotal success stories about their potential analgesic and anti-emetic effects could make cannabinoids appealing for a variety of patient populations, among them oncology patients, who often present with pain and nausea (Blake et al.; Bouquié et al.; Pearce et al.). The great adverse impact of cancer-related symptoms on quality of life, potential intolerance of standard treatment, or unresponsiveness to traditional analgesic therapy could make cancer patients more inclined to rely on alternative therapeutic options promoted in the media, such as cannabinoids (Blake et al.), especially when these products are freely accessible. Even though anecdotal evidence on the analgesic properties of cannabinoid substances is abundant, high-quality clinical trials validating these effects in cancer patients are lacking (Whiting et al.). Some evidence exists suggesting that oromucosal application of cannabinoid extracts beneficially impacts pain in advanced cancer patients (Johnson et al.; Noyes et al.). However, these studies often present with limited statistical power (Blake et al.), similar to the review of Wang et al., who reported a minimal effect of both medical cannabis extracts and cannabinoid oil on pain relief in cancer and non-cancer patients (Wang et al.). Additional evidence shows no significant change in pain relief with medical cannabis extracts among cancer patients compared to standard pain medication, thereby questioning the relevance of introducing cannabinoids into clinical practice for pain management (Campbell et al.). The same holds for the proposed anti-emetic effects of cannabinoids. Several small studies demonstrate a superior efficacy of medical cannabis extracts over placebo in the prevention and treatment of chemotherapy-induced nausea and vomiting (CINV) (Grimison et al.; Kramer; National Academies of Sciences Engineering and Medicine), but few RCTs are available, the power of the studies is questionable, and outdated anti-emetics are used as controls (Chow et al.; Mersiades et al.; National Academies of Sciences Engineering and Medicine). Despite the abundance of systematic reviews on the effectiveness of cannabinoids, the strength of their conclusions is limited by the scarcity of high-quality underlying research (Pratt et al.). Hence, evidence supporting the proposed effects of cannabinoids among cancer patients, although promising, is scant. In addition to the lack of proof of beneficial effects of cannabinoid use among oncology patients, many uncertainties persist regarding interactions between anti-cancer agents and cannabinoids, which raises concerns about the safety of cannabinoids in patients with cancer undergoing systemic treatment (Bouquié et al.).
Recent observational studies show a significant association between the use of cannabinoids during immunotherapy and worse overall survival, potentially due to interference of the anti-inflammatory effect of cannabinoids with responsiveness to immune checkpoint inhibitors (Biedny et al.). Furthermore, time to progression of the tumor (TTP) is suggested to be shorter. Despite the lack of prospective evidence on the effect of cannabinoids during immunotherapy, these observational data suggest that exposure to cannabinoid substances during immunotherapy should be approached carefully (Bar-Sela et al.). Notwithstanding the ambiguity concerning the effectiveness and safety of cannabinoid use in oncology patients, data from abroad suggest that the use of cannabinoid substances among this patient group is significant (Donovan et al.). Although the degree of consumption varied widely across different studies and countries (Martell et al.; Rajasekhara et al.; Waissengrin et al.), prevalences of up to 25% have been reported (Pergam et al.). This included both patients under best supportive care and patients on active treatment, the latter independently being proposed as a predictive factor for consumption (Martell et al.). Reported reasons for use varied and mainly included physical cancer-related symptoms (Pergam et al.), primarily cancer-related nausea and pain. Nevertheless, several studies showed that about one third of cannabinoid users consume cannabinoid substances for presumed curative purposes (Mousa et al.; Rajasekhara et al.). These are disturbing findings, since no clinical trials to date support cannabinoid use for this purpose (Mousa et al.). Such information on cannabinoid use among Dutch oncology patients, however, is not available, and results derived from other countries cannot be directly translated to the Netherlands, due to the different legal status of cannabinoids (Donovan et al.). Hence, not much is known about cannabinoid use among oncology patients in the Netherlands, nor about the characteristics of its users, patterns of consumption, reasons for consumption, and potential perceived effects. The rapid growth in public interest in cannabinoid use, its increasing social acceptance and private accessibility, the significant use of cannabinoids for medical purposes among oncology patients in other countries, and the simultaneous inconclusiveness of the available knowledge regarding its health effects (Whitcomb et al.) indicate a need to understand the use of cannabinoids among cancer patients undergoing systemic treatment in the Netherlands (Pergam et al.). This is especially important in light of the lack of proven clinical evidence for efficacy and concurrent potential curative beliefs among patients, while potential risks concerning interference with, in particular, immunotherapy cannot be excluded. Gaining insight into cannabinoid use among oncology patients undergoing systemic treatment in the Netherlands will increase awareness among doctors and help identify those patients who are most likely to use cannabinoids. This supports doctors in managing patient expectations regarding use, through increasing the understanding of potential risks, benefits, and uncertainties (Donovan et al.). Therefore, this research is aimed at gaining insight into cannabinoid use among oncology patients receiving systemic therapy in the Netherlands.
It aims to identify the demographic and clinical characteristics of patients associated with cannabinoid consumption, in addition to determinants of use and perceived effects. Thereby, this study makes a novel contribution to the existing literature on cannabinoid use among cancer patients.
Setting
This study was conducted at the Maastricht University Medical Centre (MUMC+). The study was exempted from the Human Subjects Act by the medical-ethical evaluation board of the academic hospital Maastricht and Maastricht University. Data were collected over a 10-week period, spanning from December 2020 until February 2021.
Participants
The study aimed at including 150 patients. Patients were eligible for the study if they were at least 18 years of age and received intravenous systemic treatment at the outpatient facility of Maastricht University Medical Centre (MUMC+) for any type of solid cancer. Patients treated with curative as well as palliative intent were eligible for inclusion. Intravenous systemic therapy could be the sole treatment for the malignancy or be administered as (neo)adjuvant treatment. Exclusion criteria were the inability to speak Dutch and the inability to independently answer the survey questions.
Study procedure
All patients fulfilling the study criteria were approached at our outpatient facility. After written informed consent, data were collected by means of a survey. Recruitment of patients occurred in parallel with data collection.
Data collection
The survey contained questions concerning clinical features and sociodemographic variables (sex, age, education, ethnicity, smoking history, comorbidities, current medication, type of cancer, anti-cancer treatment, and intention of treatment) and details on cannabinoid use (current or past use, the intention of use, and characteristics of use). Based on this, patients were allocated to one of five categories: (1) never used cannabinoids, (2) recreational use of cannabinoids in the past, (3) medical use of cannabinoids in the past, (4) current recreational use of cannabinoids, and (5) current medical use of cannabinoids. Patients in categories 1 and 2 were asked about the likelihood of future medical use of cannabinoids, motivation for future medical use, and characteristics of potential future use in terms of the intended product, dosage, and frequency of consumption. Patients in category 3 were asked about reasons for both starting and stopping the use of cannabinoids for medical purposes and their perceived effects, in addition to characteristics of consumption. Patients currently using cannabinoids for recreational purposes (group 4) were asked about the current frequency and duration of use and potential effects on symptoms related to their cancer. Patients currently using cannabinoids for medical purposes (group 5) were asked about the consumed product, current frequency and duration of use, motivation for use, potential curative beliefs, and perceived effects.
Data analysis
The rate of cannabinoid use was determined for the different usage groups. Per group, demographic variables, utility characteristics, potential motivational aspects, and potential perceived effects were analyzed through descriptive statistics, resulting in percentages and absolute values. Multivariate logistic regression on age, educational level, smoking history, type of cancer, intention of treatment, and type of treatment was applied to identify predictive factors for consumption. A p-value of < 0.05 indicated significance.
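As an illustration of how such an analysis can be set up, the sketch below fits a multivariate logistic regression of medical cannabinoid use on the predictors named above. It is not the authors' code: the file name, column names, and variable coding are hypothetical assumptions, and a real analysis would have to match the study's actual data dictionary.

```python
# Illustrative sketch only; column names and coding are hypothetical assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant; outcome coded 1 = current/former medical cannabinoid use.
df = pd.read_csv("survey_data.csv")

model = smf.logit(
    "medical_use ~ age + C(education) + pack_years + C(cancer_type)"
    " + C(treatment_intent) + C(treatment_type)",
    data=df,
).fit()

print(model.summary())          # coefficients with p-values
print(np.exp(model.params))     # odds ratios, the scale reported in the Results
```

Categorical predictors are wrapped in C(...) so that dummy variables are created automatically; with only 152 participants and several multi-level categories, estimates such as the reported odds ratios should be interpreted cautiously.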
A total of 153 patients signed informed consent. One person withdrew informed consent for non-specified reasons. Therefore, 152 patients were included in the analysis.
Demographics of the participants
The mean age of the participants was 63.3 ± 10.4 years (SD). Men and women were evenly represented in the study population (n = 80; n = 72). The vast majority of the participants had a Dutch ethnicity (n = 133, 91.1%). About 38% of the participants reported having at least a college degree (n = 57) (Table). Of the 152 participants, 65.0% were treated with palliative intent. Over 40% (n = 65) of the participants received immunotherapy. Different types of cancer were included; lung tumors were the most prevalent (Table).
Prevalence of cannabinoid use
In the current study population, 15% (n = 23) of the participants reported current use of any type of cannabinoid for medical purposes. Three of the current users also reported previous consumption of cannabinoids, unrelated to the current episode of consumption. For 48% of the patients using cannabinoids for medical purposes, this use was documented by the clinician in the patient files. In total, 23.0% of the participants had used cannabinoids for medical purposes, either currently or in the past. Additionally, 15.8% of the participants (n = 24) reported previous use of cannabinoid substances for recreational purposes, while 3 participants reported current recreational use of cannabinoids (2.0%). Among the participants who had never used cannabinoids for medical purposes in the past and were not currently using them with this intent, 22.5% considered future use for medical reasons.
Characteristics of users
Cannabinoid consumption was evenly divided across gender (male n = 17, female n = 18). The mean age of the users was 61.2 ± 9.3 years, which is comparable to the age of the complete study population. Of the current or previous users of cannabinoids for medical purposes, 80% were treated with palliative intent (n = 28). Multivariate analysis showed intention of treatment to be a predictive factor for consumption (p = 0.02, OR = 0.334). Of the current or former users of cannabinoids for medical purposes, 19 participants received treatment containing immunotherapy (54%), whereas 40% received treatment containing chemotherapy (n = 14). Only 2 patients received solely targeted therapy (6%) (Table). Type of treatment was shown to be a predictive factor for consumption (p = 0.026, OR = 1.564). Current or former users reported a smoking history of 32.5 ± 27.3 pack-years compared to 23.8 ± 27.1 pack-years in the general study population (Table, p = 0.3).
Features of utility
CBD oil was used by 18 patients (51.4%), whereas combined CBD/THC oil was used by 10 patients (29%). Only 2 patients reported smoking as their most favored way of consuming cannabinoids (Fig.). Among current users, consumption of CBD oil and combined CBD/THC oil was equally divided (n = 10, n = 11). Of the current users, 78% were consuming cannabinoids daily. Most of the users (52%) reported a consumption frequency of once a day, while 26% of the current users reported a consumption frequency of multiple times a day. The former users mainly reported consumption of CBD oil (n = 9, 60.0%). Only 2 patients reported trying more than one type of cannabinoid, whereas the majority of the patients who previously used cannabinoids with medical intent tried only one type of consumption (Fig.).
Most of the patients currently using cannabinoids retrieved the substance from friends or family (n = 7, 30.4%). Only one of the participants reported having the cannabinoid substances prescribed by a doctor. Of the participants who had not been in contact with cannabinoids for medical purposes and who considered starting them for these reasons, 48% (n = 13) reported their most favored source of cannabinoids to be a prescription from a doctor, whereas uncontrolled sources were less preferred.
Motivation for consumption
Almost half of all users, including the current and previous users, reported the intent to treat or cure cancer as a motivation for their consumption (n = 16, 45.7%). Among current users, this percentage was 52.2% (Fig. a), whereas among previous users, it was 26.7% (Appendix 1, Fig. 3). The majority of the patients started the use of cannabinoid substances after their diagnosis. Regarding physical symptoms, the most common symptom prompting consumption in former users was pain (Appendix 1, Fig. 3). Two of these patients reported a pain rating score equal to or higher than 5 on the Numeric Pain Rating Scale (NRS) at the time of assessment, as was the case among current users. Among current users, sleeping problems were the most frequently reported symptom as a reason for cannabinoid consumption (Fig. b; Appendix 1, Fig. 4). Psychological complaints were more often reported as a motivation among current users compared to past users (Appendix 1, Figs. 3, 4). Of the non-users who considered potential future use for medical reasons (n = 27), 74.1% reported considering using cannabinoids for pain, while 25.9% considered future use for anti-cancer purposes. Only four patients (15%) reported potential future use of cannabinoids for psychological purposes.
Perceived effects
The general effect score, rated from 1 (no effect) to 4 (great effect) and averaged over all physical and psychological symptoms, was 3.1. The participants who had used cannabinoids for medical purposes in the past reported an average effect score of 1.6. In line with this, over 47% (n = 7) of the previous users reported lack of effect as the reason for stopping the use of cannabinoids. Other reasons for stopping the consumption of cannabinoids were, among others, side effects (n = 3), the cost of the cannabinoids (n = 1), and the advice of the doctor (n = 1). In contrast, only one of the recreational users reported effectiveness of smoking cannabis on nausea and stress levels.
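As a quick sanity check of the prevalence figures above (illustrative only; the study itself reports no confidence intervals), the point estimates follow directly from the reported counts, with the 35 ever-users inferred from the 23.0% figure and the user counts by gender (17 + 18). A rough normal-approximation 95% interval conveys the sampling uncertainty of a sample of this size:

\[
\hat{p}_{\text{current}} = \frac{23}{152} \approx 0.151, \qquad \hat{p}_{\text{ever}} = \frac{35}{152} \approx 0.230,
\]
\[
\hat{p}_{\text{current}} \pm 1.96\sqrt{\frac{\hat{p}_{\text{current}}(1-\hat{p}_{\text{current}})}{152}} \approx 0.151 \pm 0.057, \ \text{i.e., roughly } 9\%\text{–}21\%.
\]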
This research aimed at gaining knowledge on the use of cannabinoid substances among cancer patients receiving systemic anti-cancer treatment in the Netherlands. The current study revealed that almost a quarter of the cancer patients had used cannabinoid substances for medical purposes, with a prevalence of active users of 15%. Potential future medical use by current non-users was reported to be 23%. CBD oil was the most frequently reported form of consumption. Users consisted mainly of patients undergoing treatment with palliative intent. More than half of the patients reporting use of cannabinoid substances received immunotherapy treatment. Intention of treatment, as well as type of therapy, turned out to be predictive factors for cannabinoid consumption. Although motivations for use varied, about half of the users reported the supposed anti-cancer properties of cannabinoids as their motivation to engage in consumption of these substances. The high prevalence of self-reported use of cannabinoid substances among cancer patients in the Netherlands is in line with the findings from several other studies, suggesting that cannabinoid use among this patient group should not be underestimated. A Canadian study, performed prior to the Canadian Cannabis Act, reported a similar prevalence of active users of 18% (Martell et al.). We expected the cannabinoid use in our study to be higher than reported in the Canadian study, for several reasons. First of all, the current study is the first study selecting patients receiving systemic therapy, which was a predictor for cannabinoid consumption in the Canadian study (Martell et al.). Furthermore, in Canada consumption of cannabis for medical purposes was an exemption under the law prior to the Canadian Cannabis Act in 2018 (Martell et al.), while in the Netherlands both recreational and medical use of cannabis are tolerated though not legalized (Gielen and de Vrey). This would justify a higher prevalence of users in the current study compared to the Canadian study. Pergam et al., who performed a similar study in Washington, where cannabis has been fully legalized, revealed a prevalence of active users of 24% (Pergam et al.). Although cultural differences between the USA and the Netherlands could affect the prevalence of consumption, a more likely explanation is the methodology of the two studies. In the Pergam study, a considerable percentage of users reported exclusively recreational use, whereas our study allowed consumption to be specified by medical intent. The innovative aspects of the current study, including the selection of patients receiving systemic therapy combined with the possibility of focusing exclusively on consumption of cannabinoid substances with medical intent, contribute to the differences in prevalence found between the studies and allow for a conscientious appraisal of medical cannabinoid consumption among cancer patients in the Netherlands receiving systemic therapy, which turned out to be clinically significant. The prevalence of cannabinoid users found in the current study is considerably higher than the reported prevalence of prescribed cannabis in the general population (Stichting Farmaceutische Kengetallen). This difference is mainly due to the fact that most patients rely on a different source for their cannabinoids, as also reported in the current study.
Additionally, the prevalence found in the current study is higher than the most recently reported yearly prevalence of cannabis use in the general population of the Netherlands (7.5%) (van Laar and van Gestel), especially when specified for the age group of the current study (1.8%) (Trimbos Instituut). These findings imply an increased susceptibility to engagement in the consumption of cannabinoids among oncology patients within this age group. The consumption of cannabinoids by cancer patients is significant, particularly when compared to the general population. The observation that over half of the patients who reported active use of cannabinoids concurrently received immunotherapy may be of concern. The immunomodulatory role of cannabinoid substances in cancer is not yet clear, and their safety during immunotherapy is not guaranteed. Recent data from a prospective observational study by Bar-Sela et al. reported shorter time to progression and shorter overall survival for cannabis users receiving immunotherapy treatment (Bar-Sela et al.). Additionally, in an earlier retrospective observational study by Taha et al., consumption of cannabis was associated with reduced response rates to nivolumab in patients with advanced cancer (Taha et al.). Underlying mechanisms concern the number and functioning of available lymphocytes, which might be altered through exposure to cannabinoids (Bar-Sela et al.). These findings suggest that adjunctive treatment with cannabinoids should be approached with caution. A commonly reported reason for consumption of cannabinoids was the treatment of pain, which could imply inadequate pain management or pain refractory to commonly used pain medication, including opioids. Although in total only 4 consumers reported an NRS score equal to or higher than 5, information about additional pain medication was incomplete. Despite the limited evidence, the Dutch Guideline for Policy and Treatment of Pain in Cancer advises considering cannabinoids for pain refractory to other pain medication (Federatie Medisch Specialisten). The consumption of cannabinoids to manage pain is in line with previous studies, in which the expected analgesic effect was also mainly reported as the motivation for use (Donovan et al.; Mousa et al.; Pergam et al.). However, whereas in other studies particularly THC was consumed (Martell et al.; Mousa et al.; Pergam et al.), the current study reported CBD oil as the most common form of consumption. Yet THC, rather than CBD, is proposed to be the analgesic component in cannabinoid substances (Good et al.; Hardy et al.; MacDonald and Farrah; Whiting et al.). Strikingly, of the patients who used cannabinoids to treat pain, only 50% used a THC-containing compound. Another great concern is that the assumed anti-cancer effect of cannabinoids was observed to be a driving motivation for use. At 46%, the percentage of patients in our study reporting this potential therapeutic effect as a reason for consumption is higher than in some previous studies, in which values around 25–30% have been reported (Mousa et al.; Pergam et al.). Not only current or previous users adhere to this therapeutic belief: a remarkable 20% of non-users also reported considering the use of cannabinoids because of their proposed anti-cancer effect. To date, no results of large clinical trials supporting the use of cannabinoids for anti-cancer purposes have been published.
Rather, potential evidence regarding efficacy has been limited to in vitro and in vivo studies (Abrams and Guzman; Bouquié et al.; Daris et al.; Turgeman and Bar-Sela). The role of social media seems to be a compelling factor in creating this belief, with news stories claiming cannabis to be an alternative treatment to cure cancer being widely spread (Shi et al.). The persistent profound belief in the assumed anti-cancer effect in the absence of clinical evidence could be related to presumed unawareness among clinicians of the consumption of cannabinoids by their patients. This is in line with the low rates of reported use in the patient files found in our study, underlining the lack of active involvement of clinicians with this topic. This suggests that even in a country where cannabis consumption is relatively tolerated, cannabinoid consumption as an alternative or adjunctive treatment is not yet a topic that is easily addressed by either the clinician or the patient, which is in agreement with previous research (Braun et al.; Kleckner et al.; Pergam et al.). Absence of clinical guidance, however, leaves patients relying on non-medical sources of information, with presumed lower degrees of clinical evidence (Zolotov et al.), thereby potentially exposing patients to medical risks. This concerns not only the anti-cancer belief but also the potential risks related to concurrent immunotherapy. Furthermore, self-medicating might lead to the consumption of products for purposes that are not actually served by the substance. Additionally, it increases the risk of overdosing, leads to a higher risk of dependency (Fitzcharles and Eisenberg; Hazekamp and Pappas), and increases the risk of using contaminated products. The startling number of users among cancer patients in the absence of clinical guidance, combined with patients' presumed misperceptions regarding purposes and utility, as shown in the current study, stresses the need for change. It necessitates a diligent role for the clinician and underlines the urgency of adequate patient education on both the potential therapeutic benefits and the possible risks of cannabinoid consumption in cancer (Donovan et al.). This is especially important in light of the high number of users receiving immunotherapy, as revealed in the current study. Considering the observational cross-sectional design, this study has its limitations. Even though different types of tumors are represented in the study, it might not completely reflect the diversity of tumors present in the general population, as a result of using a convenience sample, thereby increasing the risk of selection bias. Furthermore, even though the response rate might have been increased by the support of the researcher while filling in the questionnaire, this could have negatively affected the reporting of use due to social desirability, thereby underreporting the actual prevalence of cannabinoid use. Finally, statistical analysis was applied to a small sample. Even though the study should be interpreted in light of these intrinsic limitations, it clearly highlights the uncertainties and difficulties surrounding cannabinoid use among cancer patients.
Recommendations for further research
To encourage clinical guidance, this study comes with some recommendations for further research. First of all, more research is needed on patients' perceptions concerning the use of cannabinoid substances.
Up until now, relatively little is known about patients' decision-making processes, including how they access information (Braun et al. ). Additional research in the form of semi-structured interviews would enable a better understanding of patients' perceptions of the therapeutic effects of cannabis, as well as its adverse effects, and would create awareness of their considerations for usage (Donovan et al. ; Zarrabi et al. ). This would aid clinicians in establishing an effective dialogue with patients on this topic. Additionally, research on the safety and efficacy of medical cannabis is encouraged to provide doctors with adequate information to competently guide their patients, since many health care professionals reported feeling unprepared for this topic (Arboleda et al. ; Braun et al. ; Zolotov et al. ). Semi-structured interviews with medical specialists would be helpful in assessing their current beliefs. This is especially important in light of the great discordance between patient-perceived effects of cannabinoids and the effects reported in clinical trials, as also seen in the current study (Aviram et al. ; Good et al. ; Hardy et al. ).
This study underlines the high rate of cannabinoid use among cancer patients in the Netherlands and the existing challenges regarding motivation for usage, consumption characteristics and potential interaction with concurrent treatment. It also shows that, at the same time, awareness among medical professionals of cannabinoid consumption by their patients is disturbingly low. This study highlights the importance of a proactive role for the clinician, assessing the usage of cannabinoid substances and adequately educating patients on potential therapeutic benefits and risks, thereby preventing reliance on non-medical sources. Since data on the efficacy and safety of cannabinoids are currently ambiguous, more research is required to enable competent patient education. Furthermore, additional research on patients' attitudes and decision-making processes through semi-structured interviews is recommended to improve clinical guidance.
Quality assurance and improvement in oncology using guideline-derived quality indicators – results of gynaecological cancer centres certified by the German cancer society (DKG)
For quality assurance and to implement evidence-based guideline recommendations effectively in everyday oncological care, a ‘Quality Cycle Oncology’ has been established in Germany. Its central elements are defined quality indicators (QIs) derived from strong recommendations of S3 oncological medical guidelines developed by the German Guideline Program in Oncology (GGPO) (Langer and Follmann ). The German S3 guidelines are based on a systematic literature review, the presence of a representative interdisciplinary and interprofessional expert panel, including patient advocacy groups, and the use of a formal consensus-building process (Langer and Follmann ; Nothacker et al. ). An obligatory part of every S3 guideline development process is the definition of QIs from strong recommendations. These are considered suitable as a quality standard since it can be assumed that most patients will gain a clear benefit from the addressed actions of these recommendations. In a multi-step process, interdisciplinary experts of the guideline group identify those strong recommendations of the S3 guideline whose comprehensive implementation improves the provision of care in a defined population and whose ‘translation’ to an indicator is possible (Langer et al. ). The implementation rate of these QIs, and thus the adherence to guideline recommendations, is monitored and evaluated through the certification system implemented by the German Cancer Society (DKG), which serves as one of the core elements of the quality assurance and improvement process for certified cancer centres (Langer et al. ). The results of the QIs are regularly fed back to the GGPO guideline groups to ensure the best possible exchange between the development of evidence- and consensus-based recommendations and clinical routine practice (Beckmann et al. ). In the context of guideline updates, the existing quality indicators are also subject to the updating process. Here, the results of the quality indicators are reviewed, and a decision is made as to whether the quality indicator must be retained or changed or, in the case of complete implementation, can be discontinued (Langer et al. ). As of January 2022, 31 tumour-specific and cross-sectional S3 guidelines had been published and 192 quality indicators derived. Thereof, 108 quality indicators are implemented in 18 tumour-specific certification procedures in a total of 1,715 certified centres, including 142 outside of Germany. In the present study, which was conducted within the scope of a qualifying thesis for a doctorate in medical science at the Charité University Medicine, we present an example from the gynaecological cancer centre (GCC) certification system of the German Cancer Society (DKG). The certification system for GCCs was developed in 2008 by the DKG and the Working Group for Gynaecological Oncology (Arbeitsgemeinschaft Gynäkologische Onkologie [AGO]) and the German Society for Gynaecology and Obstetrics (DGGG) (Leitlinienprogramm Onkologie. Deutsche Krebsgesellschaft, Deutsche Krebshilfe, AWMF): S3-Leitlinie Diagnostik, Therapie und Nachsorge maligner Ovarialtumoren ). As of 2019, a total of 164 GCCs had been certified (Krebsgesellschaft e.V. Jahresbericht der zertifizierten Gynäkolgoischen Krebszentren ), and about 55% of all patients in Germany with a first diagnosis (primary case) of a gynaecological tumour in 2019 were treated in these certified GCC (Krebsgesellschaft e.V. Jahresbericht der zertifizierten Gynäkolgoischen Krebszentren ). 
Many certified GCC have also joined together in the AGO's working group AG Ovar and are part of the AGO's quality assurance program (QS-OVAR). Gynaecological tumours consist of several entities that differ in incidence, therapy and prognosis. In 2017, approximately 38,000 women in Germany were diagnosed with a gynaecological neoplasm (Robert Koch Institut ). The GCCs, like all other cancer centres of the DKG, are multidisciplinary and interprofessional networks of qualified partners that represent the entire chain of health care. They commit themselves to adhering to the defined quality standards (i.e., minimum case numbers, tumour boards, high expertise of all network partners, etc.) and transparently disclose the results of their key performance indicators and guideline-derived quality indicators to demonstrate their quality of care and guideline adherence and discuss, if necessary, improvement measures (Mensah et al. ). Especially for gynaecological tumours, various studies have shown that the interdisciplinary cooperation and highly specialised surgical expertise of the clinic and surgeons as well as the surgical case volume have been of great benefit to patients and have had a relevant influence on the clinical outcome (Wright et al. ; Bristow et al. ; Bois et al. ; Munstedt et al. ). The focus of this study will be on two selected gynaecological tumours, namely ovarian and cervical cancers. For both tumour entities, S3 guidelines are available and regularly updated (Leitlinien Programm Onkologie (Deutsche Krebsgesellschaft, Deutsche Krebshilfe, AWMF). S3-Leitlinie Diagnostik, Therapie und Nachsorge maligner Ovarialtumoren; Leitlinien Programm Onkologie (Deutsche Krebsgesellschaft Deutsche Krebhilfe, AWMF). S3-Leitlinie Diagnostik, Therapie und Nachsorge der Patientin mit Zervixkarziom ), and in GCCs it has been obligatory to document QIs for these two entities since 2014 for OC and 2015 for CC. For endometrial and vulvar tumours, QIs have been implemented only recently, in 2018 and 2016, respectively, and no S3 guideline is yet available for vulvar carcinoma. Comprising 3.1% of all malignant neoplasms and 5.2% of all cancer deaths in women, ovarian cancer is the gynaecological cancer with the highest mortality rates (Wesselmann et al. ; Robert Koch Institut ), representing 19.2% of incident cases of gynaecological neoplasms (Robert Koch Institut ). Despite advances in screening and prevention measures, invasive cervical carcinoma, at 11.4% of cases, remains the third most common gynaecological neoplasm in women in Germany and worldwide (Robert Koch Institut ; Leitlinien Programm Onkologie (Deutsche Krebsgesellschaft Deutsche Krebshilfe, AWMF). Prävention des Zervixkarzinoms ). Using the example of QIs for ovarian and cervical cancer, this study set out to investigate the development of the implementation rate over time, report results for the time period between 2015 and 2019, evaluate the status of guideline-compliant care and identify areas and corresponding measures to foster improvement. A further goal of this paper is to raise awareness of the potential of guideline-based QIs and their results to contribute to quality assurance and improvement in the clinical routine. The aim is to initiate a discussion and thus jointly define actions and measures to improve health service delivery to ovarian and cervical cancer patients. Data collection Each GCC that intends to be (re-)certified must document fulfilment of the requirements. 
Annually, the results of key performance and quality indicators must be reported to OnkoZert, the independent certification institute that organizes the auditing procedure on behalf of the DKG. After collection from the centres, the datasets are analysed and tested for plausibility. Indicators mostly have target values or defined plausibility limits in which the certified centres have to give a mandatory statement of reasons as to why the limits were overstepped, i.e., in the case of deviation from the guideline recommendation. When target values or plausibility thresholds are reached, centres do not have to give explanations for patients not treated accordingly. For successful certification, cancer centres have to meet the target value or give a plausible explanation if they are not meeting the value (Adam et al. ). Centres are audited regularly by trained gynaecological oncologic medical experts who check the reported data from the previous calendar year before the audit and have insight into patient files during the audit to verify the data. Only verified data are published in the benchmarking reports. For example, 2019 data are audited during 2020 and published in 2021. The data presented here are based on the 2015–2019 patient cohort. Only data from centres that were certified throughout the complete year and had no change in the tumour documentation system are included. The QIs included in this study are derived according to a defined methodology (German Guideline Program in Oncology (German Cancer Society, German Cancer Aid, Association of the Scientific Medical Societies). Development of guideline-based quality indicators: methodology for the German Guideline Program in Oncology ) from the two evidence-based guidelines on the diagnosis, therapy and follow-up of malignant ovarian tumours and patients with cervical cancer published by the GGPO (Leitlinien Programm Onkologie (Deutsche Krebsgesellschaft, Deutsche Krebshilfe, AWMF). S3-Leitlinie Diagnostik, Therapie und Nachsorge maligner Ovarialtumoren ; Leitlininien Programm Onkologie (Deutsche Krebsgesellschaft Deutsche Krebhilfe, AWMF). S3-Leitlinie Diagnostik, Therapie und Nachsorge der Patientin mit Zervixkarziom ). The treatment guidelines, the corresponding QI and the QI set collected via the certification programme are regularly updated. In this analysis, only QIs that were included in the DKG dataset from 2014 onward and still included as of 2021 were taken into consideration. QIs that had been discontinued over time were not included in this analysis. An overview of discontinued QIs can be seen in (Table ). Data analyses Descriptive analysis of the case distribution, patient numbers and indicator definitions were performed. QI results for patients with cervical cancer (CC) and ovarian cancer (OC) treated in GCCs between 2015 and 2019 were analysed. Only patients from GCCs that had certified status over the entire time period were considered. The median proportion of the centres and overall proportion was calculated for every QI. Two-sided Cochran-Armitage tests were applied to detect trends over time. The standard deviations on the centre level over time were calculated to analyse fluctuations. Statistical analyses were performed using R version 3.5.1 and the Data-WhiteBox, a data analysis tool developed by OnkoZert. Cochran–Armitage tests were calculated using XLSTAT Version 2019.2.1, excluding centres that had missing values at any reporting point. A p -value ≤ 0.05 was considered statistically significant. 
The data analysis and study concept were reviewed and approved by the ethics committee of Charité University Medicine in November 2021.
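As a rough illustration of the trend testing described above, the following Python sketch implements the standard two-sided Cochran-Armitage statistic on hypothetical QI counts. The published analysis itself was run in R and XLSTAT, so this is only meant to show the calculation; the counts below are placeholders, not data from the benchmarking reports.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(fulfilled, eligible, scores=None):
    """Two-sided Cochran-Armitage test for trend in a 2 x k table.

    fulfilled[j] -- patients fulfilling the QI in year j
    eligible[j]  -- eligible patients (QI denominator) in year j
    scores[j]    -- ordinal scores for the years (defaults to 0, 1, ..., k-1)
    """
    n1 = np.asarray(fulfilled, dtype=float)      # row of "QI fulfilled"
    cols = np.asarray(eligible, dtype=float)     # column totals per year
    n2 = cols - n1                               # row of "QI not fulfilled"
    t = np.arange(len(cols), dtype=float) if scores is None else np.asarray(scores, dtype=float)

    N, r1, r2 = cols.sum(), n1.sum(), n2.sum()
    stat = np.sum(t * (n1 * r2 - n2 * r1))
    var = (r1 * r2 / N) * (
        np.sum(t ** 2 * cols * (N - cols))
        - 2 * sum(t[i] * t[j] * cols[i] * cols[j]
                  for i in range(len(cols)) for j in range(i + 1, len(cols)))
    )
    z = stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                # z statistic and two-sided p-value

# Hypothetical counts for one QI over 2015-2019 (illustrative only)
z, p = cochran_armitage_trend(fulfilled=[88, 95, 105, 110, 123],
                              eligible=[140, 142, 150, 148, 155])
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```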
The number of certified GCCs increased steadily from 112 in 2015 to 149 in 2019, and the number of patients with a primary diagnosis of a gynaecological malignancy treated in GCCs increased from 11,587 to 14,986. Therefore, even though the incidence of OC and CC in Germany has been decreasing over time, from 7318 to 7292 and 4606 to 4341, respectively (Robert Koch Institut ), the number of patients treated for these two tumour entities in GCCs has increased (OC: 3301–3798 and CC: 2059–2479) (Krebsgesellschaft e.V. Jahresbericht der zertifizierten Gynäkolgoischen Krebszentren ). The indicators are defined and categorized in (Table ), including the numerator, denominator and plausibility corridor for the reported QI results. QIs were divided into two categories, (1) process organization (PO-QIs) and (2) treatment procedures (TP-QIs), to allow a differentiated analysis in order to identify areas and corresponding measures to foster improvement in the implementation rate. Process organization QIs are defined as indicators that document the implementation of processes and structures explicitly recommended by the medical guideline within the certified network. Treatment procedure QIs are defined as indicators that report on treatments performed by the members of the certified network, e.g., surgical interventions or recommendations for systemic therapies. Five QIs were included in the category treatment procedures (four for OC, one for CC) and four QIs in process organization (one for OC, three for CC). Table presents the results of 9 QIs (5 OC, 4 CC) from 75 GCCs treating 17,495 OC primary cases (incident cases) and 10,969 CC primary cases between 2015 and 2019. The implementation rate for PO-QIs, which reflect the application of processes and structures, either remained stable at a very high level or increased steadily over time to a very high level (e.g., CC: details in pathology report for lymphonodectomy—median 2015: 88.0% to 2019: 97.8%; OC: operation of advanced ovarian carcinoma by a gynaecological oncologist—median 2014: 100.0% to 2019: 100.0%). The implementation rate for TP-QIs, which report on treatment methods, is overall high, yet the median fluctuates slightly over time (e.g., OC: macroscopic complete resection advanced OC—median 2014: 58.8%; 2015: 62.5%; 2016: 70.0%; 2017: 69.6%; 2018: 68.3%; 2019: 75.0%). Breaking down the TP-QI category further, TP-QIs that address recommendations for systemic therapy show a good to very good implementation rate; however, the analysis indicates that the median is not only fluctuating but decreasing over time (OC: post-operative chemotherapy advanced ovarian carcinoma—median 2014: 94.6% to 2019: 88.9%; OC: first-line chemotherapy of advanced ovarian carcinoma—median 2014: 69.2% to 2019: 60.1%). By contrast, the overall median for TP-QI results referring to surgical interventions shows a good to very good implementation rate, which increased over the past 4 years. The median fluctuates over time (QI 1 surgical staging in early OC—median 2014: 75.0% to 2019: 81.8%; QI 2 macroscopic complete resection advanced OC—median 2014: 58.8% to 2019: 75.0%).
Calculating the SD using the annual QI quota of each centre, the overall mean SD of all QI was calculated and is displayed in a boxplot diagram in (Fig. a, b). Analysis of the implementation rate on the individual centre level shows that the results within one centre can vary over time. The mean SD for PO-QIs is the lowest, between 4.4 and 18.2 (e.g., QI 14 presentation at the tumour board CC, mean SD 4.4), the mean SD for TP-QIs that address systemic therapies lies between 11.8 and 16.2 (e.g., QI 12 post-operative chemotherapy for advanced OC, mean SD 11.8), and the mean SD for TP-QIs reporting surgical intervention is the highest, between 15.0 and 19.1 (e.g., QI 1 surgical staging early OC cumulative mean SD 19.1). The Cochran-Armitage test shows positive trends for five out of nine QI. Positive trends in both categories show four QIs in treatment procedures and one QI in process organization. Trend analyses were conducted over the course of 4 years for the QI 2 ‘macroscopic complete resection advanced OC’, QI 4 ‘postoperative chemotherapy advanced OC’ and QI 5 ‘first-line chemotherapy of advanced OC’. For QI 9 ‘cytological/histological lymph node staging’, the analysis was conducted over the course of 3 years. This article presents, for the first time, a differentiated overview of the implementation level and development of guideline-derived QI results for OC and CC in certified GCCs. The results of the evaluated QIs show that the recommendations of the guidelines are implemented to a high or very high extent in the certified GCCs. The quality of care is made visible, and results can be compared between centres. Grouping the analysed QIs into two categories—process organization and treatment procedures—offers the opportunity to assess the improvement potential of QIs in a differentiated way and allows identification of suitable measures for improvement, which can be implemented in the certified centres. QIs that reflect the implementation of processes and structures within the certified networks are very well applied. The results illustrate that QIs related to procedural aspects have a very high implementation rate (2019: QI 3: 100%; QI 6: 100%, QI 7: 92.3%; QI 8: 97.8%). The excellent implementation rate of this category of QIs has often been realized right from its introduction (e.g., QI 1 and QI 6 each 2015: 100% and 2019: 100%) and is maintained over time. For instance, mandating that surgical therapy for advanced ovarian cancer can only be performed by specialized gynaecologists not only improves outcomes and lengthens survival (Bois et al. ; Munstedt et al. ; Begg et al. ; Junor et al. ) but is also easily achievable via a top-down process arrangement. The same process can be applied within the network and to cooperation partners regarding implementation of QI 6 (tumour board presentation rate) and the definition of mandatory information to be included in pathology reports, such as initial diagnosis, tumour resection and, if applicable, indication that lymphadenectomy is complete (QI 7 and QI 8). These procedural QIs have a tremendous influence on the quality of patient care, while being relatively easy implementable in GCCs, e.g., through standard operating procedures and handling instructions. This is also shown by a consistently high implementation rate and low mean SD of the PO-QI on the individual centre level. 
Hence, in principle, these indicators and corresponding target values are easily reachable for every certified centre while taking into account justifiable individual cases such as emergency surgery, preventing presentation at the pre-therapeutic tumour board. In the case of repeated not-justifiable non-fulfilment of this indicator group, a ‘deviation’ in the audit will be given. An ultimate failure to fulfil the indicators can lead to withdrawal of the certificate. Results from QIs that report on treatment procedures such as surgical interventions and recommendations for systemic therapy present a slightly different picture. For evaluation of adherence to recommendations for treatment procedures, it must be considered that situations in routine care are very complex, and conclusions from raw QI data on quality of care are not readily possible (Junor et al. ). For example, QI results that do not reach a pre-defined threshold (target value) do not necessarily indicate insufficient performance on the part of the providers. Under such circumstances, additional information is needed to decide whether quality of care is adequate or not (Junor et al. ). Therefore, the given explanations by the certified centres are discussed with the auditor during the on-site audit and checked through random samples of patient files. If explanations of the centres seem not to be adequate, the auditors pronounce ‘deviations’ that need to be remedied by the centres (Kowalski et al. ). If the explanations are plausible and justifiable, no further action is required. QIs that call for the implementation of systemic therapies in line with the guideline recommendations show a good yet decreasing implementation rate over time in this analysis (QI 4: 2014 94.6% to 2019 88.9% and QI 5 2014 69.2% to 2019 60.3%). Explanations from the centres that fell below the target value included, for both QIs, mainly patient-related reasons (i.e., patient death after surgery, patient wish, existing comorbidities and/or poor general health, therapy termination due to side effects). For QI 5 (First-line chemotherapy of advanced OC) comorbidities and poor general health often also caused changes in therapy regimes. Patients being treated ex domo / outside the network as well as the time of data reporting (i.e., patients can only be counted in the numerator when the therapy is completed) were named as reasons why patients were missing even though the recommendations for chemotherapy was provided during the tumour boards. It must be kept in mind that written explanations only have to be provided in case the number of patients is below the threshold (QI 4 < 30%; QI 5 < 20%), i.e., if the overall number of eligible patients in the numerator or the median decreases but remains above the threshold, the certified GCCs do not have to provide a reason. Thus, based on this preliminary evaluation, it can be argued that in contrast to the results of the PO-QIs, the implementation rate for QIs documenting the application of systemic therapies reaches a plateau where the guideline recommendation is known to the practitioners, but patient-related factors prevent a further meaningful increase in the rate. Hence, fluctuations of the implementation rate and higher mean SD of these TP-QIs on the individual centre level are to be expected. The decreasing implementation rate could be in relation to an older age and/or the existence of multiple comorbidities and/or other therapy regimes. 
Unfortunately, this cannot be further explored with the present data set, as socio-demographic information and detailed information about comorbidities are not yet available or are too superficial. By contrast, TP-QIs that report on surgical interventions offer more room for improvement measures. This set of QIs reflects not only patient-related factors (i.e., comorbidities, poor overall health status, patient rejection of surgery) but also the professional expertise of the surgical team. Surgical therapy is one of the fundamental pillars of the treatment strategy for OC and CC. Not only is it the most important diagnostic instrument; it also has a direct and strong influence on prognosis and is part of a mostly multimodal and interdisciplinary therapy concept (Sehouli et al. ). As with the QIs reporting on systemic therapy, the data show an increase over time and then a plateau in the implementation rate (i.e., QI 1 2014: 75.0% to 2019: 81.8%; QI 2 2014: 58.8% to 2019: 75.0% and QI 9 2015: 63.2% to 2019: 72.9%). While keeping in mind that the denominator of the surgical QIs was often small, explanations for not meeting the QI 9 (cytological/histological lymph node staging) target value mostly included the application of radiochemotherapy prior to cytological/histological lymph node staging. For QI 2 (macroscopic complete resection of advanced OC), the existence of multiple (distant) metastases was given as the most frequent reason for an incomplete macroscopic resection. As reported above, some patients also decided to undergo the procedures outside of the certified network. However, besides patient-related topics, the most frequent reasons for not reaching the QI target value included an inoperable situs due to advanced spreading of the carcinoma or an intraoperative assessment that deemed the surgery not possible. In the case of QI 2, it was stated several times that the tumour could only be reduced in size but not removed. The data unfortunately do not allow us to assess whether other surgical teams would have come to different conclusions and assessments. During the audit, auditors and physicians of the GCC discuss whether the results are justifiable, but explanations regarding the deviations are typically brief and often superficial (Inwald et al. ). The following further limitations need to be pointed out in the light of the data interpretation. Firstly, only aggregate data are submitted by the individual centres; hence, assessment of individual patients' information regarding case severity or socio-demographics is not possible. Secondly, the centres included in this analysis could be prone to selection bias, as often only centres that are already performing well join quality assurance programmes. Also, the data investigated here cannot be linked to survival data from registries. For these QIs, the most relevant factors are the personal skills of the practitioners, and when these are combined with the technical prerequisites, opportunities to identify measures for improvement arise. Thus, measures for improving the implementation rate of this QI set, besides the discussion of results amongst peers during the audit, could additionally include offers of surgical courses or coaching. Interestingly, the data also show that on the individual centre level, the results for macroscopic complete resection, surgical staging of early OC and cytological/histological LN staging can vary widely from one year to another, with an overall standard deviation of up to 19.
Reasons for these fluctuations cannot be provided with the currently available data. When interpreting the results, we must bear in mind the primary purpose of data collection, i.e., creating a basis for the decision of whether or not the certificate should be issued (Inwald et al. ). Further investigation is thus necessary. Notwithstanding, one hypothesis could be that, for instance, staff changes in the surgical team could explain why several centres with high indicator results in 1 year can have lower results in the forthcoming year. It could be argued that, meanwhile, the certified GCCs who maintain a constantly high implementation rate provide a good environment for surgeons in training and could be the ones selected to offer coaching courses for other GCCs. To achieve the best possible treatment outcomes for women with gynaecological malignancies, synergistic collaboration across all disciplines and professional groups involved in oncological care as well as the pursuit of specialization by physicians are important elements (Wesselmann et al. ). QIs support the establishment of guideline-based treatment in everyday clinical practice and motivate practitioners to critically reflect on their treatment results. In the audit procedures, these results are discussed, and measures are identified that enable better application of the guideline contents. The effectiveness of these measures is reviewed in the next audit 1 year later. The results of the QIs will be reported to the medical guideline development groups and provide information on how and to what extent a recommendation is implemented in everyday clinical practice and thus offer additional suggestions for further development of the guidelines. Furthermore, the results of this analysis, with a focus on ovarian and cervical cancer, suggest that dividing the analysed QI into two categories—process organization and treatment procedures—provides an opportunity to evaluate the QI improvement potential in different ways and allows the determination of appropriate improvement measures and therefore shows that a combination of different measures is necessary to anchor quality sustainably in health care and thus improve it.
A retrospective analysis of preemptive pharmacogenomic testing in 22,918 individuals from China
INTRODUCTION Individuals have different genetic makeups, which may influence the risk of disease development as well as responses to drugs and environmental factors. Variants of genes involved in drug metabolism, drug transport, and target binding are linked to interindividual differences in both the efficacy and toxicity of many medications. Indeed, hundreds of genes affecting medication metabolism have been reported, and the availability of genomic data is leading to the discovery of new interactions. , , The findings of these studies are compiled through curation efforts such as PharmGKB ( https://www.pharmgkb.org/ ). Precision medicine has the goal of exactly matching a therapeutic intervention with the patient's molecular profile. Pharmacogenomics (PGx) focuses on the involvement of genomics and genetics in drug responses by integrating pharmacological effects and genotype, , and by offering personalized drug selection and dosage based on an individual's genetics, PGx may revolutionize patient care. Overall, the practical value of PGx testing has increased as high‐impact haplotypes have been discovered and characterized. The Clinical Pharmacogenetics Implementation Consortium (CPIC; cpicpgx.org ) and other organizations assign a clinical function to star alleles based on published experimental research and create peer‐reviewed and evidence‐based clinical practice guidelines , to aid physicians in implementing pharmacogenetics into clinical practice. Pharmacogenomics testing can be preemptive before prescription, or reactive in response to treatment failure or an adverse drug reaction. Preemptive PGx testing is the availability of information before the time of prescription, allowing this to be personalised to the patients when needed. A preemptive, panel‐based approach is increasingly playing an important role in supporting the use of genotype‐guided prescribing in clinical practice and the genetic information can be coupled to the patient's medical record to inform future drug therapy. Consequently, there are many programs around the world to support this research. In a previous study of five drug genomes in over 10,000 patients, the race/ethnicity of the majority of the cohort was European American, and a multiplexed test revealed an actionable variant in 91% of genotyped patients. Additionally, a Danish study involving 77,684 individuals with 42 clinically relevant variants and CYP2D6 gene deletion and duplication showed that almost all individuals carried at least one genetic variant (>99.9%), with 87% harboring three or more. PGx testing data of 1141 samples by exome sequencing in Hong Kong China revealed that 99.6% of subjects carried at least one such variant. The China Metabolic Analytics Project (ChinaMAP) was designed to comprehensively characterize the diverse genetic architectures of Han Chinese and other major ethnic minorities across different geographical areas and investigate their contribution to metabolic diseases as well as a broad spectrum of biomedically relevant quantitative traits. This project also studied the genetic diversity of some important PGx genes, such as genes related to the dosage of warfarin and clopidogrel. Furthermore, a comparison in European countries showed that race influences dose changes associated with genetic factors. Nevertheless, there has not been a study on large samples of a preemptive, panel‐based PGx testing in mainland China. The broad impact of diverse geographic distribution on PGx testing is not well understood in China. 
In this study, we retrospectively analyzed preemptive PGx testing data of 22,918 participants from 20 provinces of China. The PGx testing was performed by a 52‐gene targeted next‐generation sequencing (NGS) PGx panel, which covered 100 SNPs of 52 genes, and a full gene deletion of CYP2D6 (Table ). Of 52 genes targeted by the panel, 15 genes were involved in CPIC guidelines for 31 drugs, including CYP2C9 , SLCO1B1 , CYP2C19 , CYP2D6 , VKORC1 , CYP4F2 , G6PD , NUDT15 , CYP3A5 , IFNL4 , TPMT , HLA‐A , HLA‐B , UGT1A1 , and MT‐RNR1 (Table ). , , , , , , , , , , , , , , , , We utilized sequencing results of these 15 genes to find the opportunity for pharmacogenomic‐guided prescribing for 31 drugs according to CPIC guidelines. The other 37 genes, whose sequencing results were not interpreted by CPIC guidelines, were used for allele frequency analysis and quality control. Using this panel, we performed preemptive PGx testing for 22,918 subjects from 20 provinces in China. The results could provide evidence to evaluate the value of preemptive PGx testing and to optimize clinical practice in China. MATERIALS AND METHODS 2.1 Study subjects We retrospectively analyzed preemptive PGx testing data of subjects from 20 provinces in China from May 2019 to April 2022. These subjects were referred to preemptive PGx testing by physicians as part of their health care. They (or their guardians who <18 years) agreed to receive the preemptive PGx testing and signed informed consent after consulting physicians. Blood samples were collected from 23,199 consecutive, unrelated individuals and transported to CapitalBio Medical Laboratory for PGx testing. Two hundred eighty‐one low‐quality samples were filtered after quality control. Thus, 22,918 qualified samples from 22,918 subjects were included in this study. Of these subjects, there were 13,805 (60.29%) < 18 years, 6789 (29.57%) >= 18 years and 2324 (10.14%) without age information (Table ). Over 90% (12,782/13,805) of those under 18 years were newborns, and they received preemptive PGx testing during neonatal screening. In this study, our main objective was to research the genetic information. Thus, age distribution will not affect our conclusions. The data were deidentified prior to further analysis. This study was approved by the ethics committee of People's Hospital of Yangjiang (No. 20210047). 2.2 PGx panel MagPure tissue and blood DNA LQ kit (Magen Biotechnology) was used for DNA extraction from blood samples. Target region amplification and sequencing library construction were then performed by a multiplex‐PCR method using a library construction kit for PGx (CapitalBio Genomics) following the manufacturer's protocol. The process is briefly described as follows. First, the sequences of the target regions were amplified using gene‐specific primers. Second, library construction was carried out using library construction primers that added sequencing adapters to both ends of the product. The sequencing adapter contains a barcode sequence of 8–10 bp for distinguishing different samples. Third, the sequencing library was purified by AMPure XP beads (Beckman Coulter), and quantified by Qubit (Thermo Fisher Scientific). At last, all libraries were sequenced according to the standard 200‐bp single‐end sequencing procedure of the BioelectronSeq 4000 sequencing system (National Medical Products Administration registration permit NO. 20203220502), which utilizes the same sequencing principle as the Ion Proton sequencer (Thermo Fisher Scientific). 
Each sequencing run is able to process 120 samples. The CYP2D6 full gene deletion was detected by a long‐PCR method as described by Hersberger et al. 2.3 Sequencing data analysis Raw data were filtered using a homemade pipeline to exclude reads shorter than 70 bp or with more than 50% low‐quality bases (quality score < 20), providing high‐quality clean reads. TMAP ( https://github.com/iontorrent/TAMP , version 5.4.11) was used to map the clean reads to the hg38 version of the human reference genome with “mapall” and “map4” parameters and to obtain bam files. The bam files were compressed, sorted and indexed using samtools ( http://samtools.sourceforge.net/ , version 1.2). For sorted bam files, Torrent Variant Caller ( https://github.com/LeeBergstrand/Torrent_Variant_Caller , version 4.4.2.1) was used for SNP/Indel calling, and the “hotspot‐vcf” parameter was selected to detect variants for targeted loci. 2.4 Validation of PGx panel by Sanger sequencing We designed amplification primers for all 100 SNPs detected by PGx panel (Table ) using Primer Premier V5.0. The length of amplicons was limited to 200~800 bp. The target sites were away from forward/reverse primers>60 bp. The reaction was carried out in 50 μL volume containing DNA template X μL (X ≤ 19 and 30 ng < total DNA quality<100 ng), forward primer 2 μL, reverse primer 2 μL, Phanta Mix (Vazyme Biotech, including Phanta Max Super‐Fidelity DNA Polymerase, Phanta Max Buffer, and dNTP) 27 μL, nuclease‐free water (Thermo Fisher Scientific) (19‐X) μL. The following PCR conditions were used: initial denaturation at 95°C for 3 min, followed by 35 cycles consisting of denaturation (95°C for 15 s), annealing (65°C for 15 s, decreased 0.5°C per cycle before 55°C) and extension (72°C for 1 min) and a final step at 72°C for 5 min. The PCR products were sequenced by a 3730XL DNA analyzer (Thermo Fisher Scientific) following the manufacturer's protocol. 2.5 Star allele analysis and phenotype prediction Star allele analysis of CYP2D6 , CYP2C19 , CYP2C9 , CYP3A5 , UGT1A1 , NUDT15 , and TPMT was performed using the tag SNP method. First, a diploid combination of all detected alleles was constructed, after which the allele combination of the sample based on SNP test results was determined. For CYP2D6 , six tag SNPs were used to detect six important star alleles: rs1135840, rs16947 for *2; rs3892097 for *4; rs1135840, rs1065852 for *10; rs1135840, rs16947, rs5030865 for *14; rs1135840, rs28371725, rs16947 for *41 and wild‐type for *1. CYP2D6 *5 allele were detected by a long‐PCR method as described in section 2.2. When the long‐PCR results indicated that one or two copies of CYP2D6 were missing, the genotypes were adjusted accordingly. For the other six genes, we have used 14 tag SNPs to detect 22 star alleles (Table ). The enzyme activity scoring table provided by CPIC was used for phenotype prediction. 2.6 CPIC recommendations Clinical Pharmacogenetics Implementation Consortium guidelines for 31 drugs covering 15 genes were applied to interpret the genetic data of each sample (Table ). For each gene, actionable genotypes were defined as genotypes required to change the medication strategy of at least one drug according to CPIC guidelines, including alternative drug, decreased dose, and increased dose (Table ). Among actionable genotypes, those required an alternative drug according to CPIC guidelines were defined as high‐risk genotypes(Table ). 
In addition, the high‐risk ratio of a drug was calculated as the proportion of subjects who carry high‐risk genotypes for that drug according to the CPIC guidelines. For example, there were 2737 subjects in all provinces who carried at least one copy of either HLA‐A*31:01 or HLA‐B*15:02 and were recommended to use an alternative drug instead of carbamazepine, so the high‐risk ratio of carbamazepine was 11.94% (2737/22,918). In order to study intra‐country differences of high‐risk ratios, the risk ratio (RR) of each drug in each province was calculated as follows: RR of a drug in a province = (high‐risk ratio of the drug in that province) / (high‐risk ratio of the drug in all provinces). For example, the high‐risk ratio of carbamazepine in HAINAN province was 17.48% (18/103) and that in all provinces was 11.94% as mentioned above, so the RR of carbamazepine in HAINAN province was 1.46 (17.48%/11.94%).
2.7 Statistical method
PLINK with the “‐indep‐pairwise 50 5 0.5 ‐‐file data/my‐noweb” parameters was used to filter linked gene sites before performing clustering and principal component analysis (PCA). The pheatmap method in R was used for clustering with the “average” method. PCA in the python‐based sklearn package was used for analysis. Frequencies and ratios were compared by Fisher's exact test using python v3.8.13 with the “scipy.stats” package, and a p value of <0.01 was considered statistically significant.
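As a minimal Python sketch of these ratio calculations, using the scipy.stats package named above: the carbamazepine counts are those quoted in the text, and everything else is illustrative rather than a reproduction of the published analysis.

```python
from scipy.stats import fisher_exact

def high_risk_ratio(n_high_risk, n_total):
    """Proportion of subjects carrying a high-risk genotype for a given drug."""
    return n_high_risk / n_total

def risk_ratio(prov_high, prov_total, all_high, all_total):
    """RR = provincial high-risk ratio divided by the nationwide high-risk ratio."""
    return high_risk_ratio(prov_high, prov_total) / high_risk_ratio(all_high, all_total)

# Carbamazepine example from the text: 18/103 in HAINAN vs. 2737/22,918 nationwide
rr = risk_ratio(18, 103, 2737, 22918)

# Fisher's exact test comparing the provincial counts with the nationwide counts
table = [[18, 103 - 18], [2737, 22918 - 2737]]
_, p = fisher_exact(table, alternative="two-sided")
print(f"RR = {rr:.2f}, Fisher's exact p = {p:.3g}")
```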
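The tag-SNP star-allele assignment and activity-score-based phenotype prediction outlined in section 2.5 can be sketched in a similarly compact way. The allele and function tables below are deliberately reduced to two illustrative CYP2C19 no-function alleles and a toy calling rule; a real implementation would use the panel's validated allele definitions, handle phasing and CYP2D6 copy number, and apply the full CPIC activity-score tables.

```python
# Illustrative only: toy tag-SNP definitions for two CYP2C19 star alleles.
# Real assignments must follow the panel's validated allele tables and CPIC data.
TAG_SNPS = {
    "CYP2C19*2": {"rs4244285": "A"},   # no-function tag (illustrative)
    "CYP2C19*3": {"rs4986893": "A"},   # no-function tag (illustrative)
}
FUNCTION = {"CYP2C19*1": "normal", "CYP2C19*2": "no function", "CYP2C19*3": "no function"}

def call_alleles(genotypes):
    """Assign a diplotype from unphased tag-SNP genotypes (simplified).

    genotypes maps rsID -> pair of observed bases, e.g. {"rs4244285": ("G", "A")}.
    Each variant tag seen contributes one allele copy per observed alternate base;
    remaining copies default to *1. Phasing ambiguities are ignored here.
    """
    alleles = []
    for star, tags in TAG_SNPS.items():
        for rsid, alt in tags.items():
            alleles += [star] * genotypes.get(rsid, ()).count(alt)
    alleles += ["CYP2C19*1"] * (2 - len(alleles))
    return sorted(alleles[:2])

def predict_phenotype(diplotype):
    """Map a diplotype to a coarse metaboliser phenotype (simplified CPIC logic)."""
    n_lof = sum(FUNCTION[a] == "no function" for a in diplotype)
    return {0: "normal metaboliser", 1: "intermediate metaboliser", 2: "poor metaboliser"}[n_lof]

dip = call_alleles({"rs4244285": ("G", "A"), "rs4986893": ("G", "G")})
print(dip, "->", predict_phenotype(dip))   # ['CYP2C19*1', 'CYP2C19*2'] -> intermediate metaboliser
```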
RESULTS
3.1 Study subjects and geographic distribution
A total of 22,918 subjects (male 11,455/49.98% vs. female 11,463/50.02%) were included in this study, which covered 20 provinces across China. After clustering analysis with mutant allele frequencies (Table ), the 20 provinces were divided into three groups (Figure ), which are geographically distributed from north to south in China (Figure ). The north group (indicated in blue in Figure ) includes 12 provinces: JILIN, LIAONING, INNER MONGOLIA, BEIJING, HEBEI, SHANXI, GANSU, SHAANXI, SHANDONG, HENAN, ANHUI, and JIANGSU. The middle group (indicated in pink in Figure ) includes six provinces: HUBEI, SICHUAN, HUNAN, FUJIAN, GUANGDONG, and YUNNAN. The south group (indicated in yellow in Figure ) includes GUANGXI and HAINAN.
3.2 Quality control and validation
A total of 100 variants in 52 genes (Table ) were detected by high‐depth sequencing with a mean depth higher than 1000×. The lowest depth of each site for each sample was 30× (Figure ). In order to validate the PGx panel, we performed Sanger sequencing for samples with different genotypes at each targeted gene locus. In total, we performed 488 Sanger sequencing reactions for validated samples with PGx panel results, including 187 homozygous wild‐types, 152 heterozygous variants, and 149 homozygous variants (Table ). All genotypes determined by Sanger sequencing were concordant with the PGx panel.
3.3 PCA of the mutant allele frequencies
Principal component analysis was performed with the mutant allele frequencies (Table ).
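A rough outline of this clustering and PCA step might look as follows; the PCA uses the scikit-learn package named in the statistical methods, while the average-linkage clustering is done here with SciPy rather than the R pheatmap package used in the paper. The frequency matrix is randomly generated as a stand-in for the per-province mutant allele frequencies, so the output is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage

# Stand-in matrix: rows = provinces (plus reference populations such as CHB/CHS),
# columns = mutant allele frequencies of the LD-filtered PGx sites.
rng = np.random.default_rng(0)
freq = rng.uniform(0.0, 0.6, size=(25, 80))

# Average-linkage hierarchical clustering, analogous to pheatmap's "average" method
Z = linkage(freq, method="average")

# Two-component PCA of the frequency matrix
pca = PCA(n_components=2)
coords = pca.fit_transform(freq)
print("explained variance ratio:", pca.explained_variance_ratio_)
# coords[:, 0] and coords[:, 1] would then be plotted to look for the
# north / middle / south grouping described in the text.
```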
At the same time, we also added five datasets of the 1000 Genomes Project including CHB (Han Chinese in Beijing, China), CHS (Han Chinese South, China), CDX (Chinese Dai in Xishuangbanna, China), KHV (Kinh in Ho Chi Minh City, Vietnam), and JPT (Japanese in Tokyo, Japan). Both clustering and PCA results support dividing the 20 provinces into three groups according to the geographic distribution (Figure ).
3.4 Proportion of subjects carrying actionable genotypes
Of the 22,918 subjects, 99.97% carried at least one actionable genotype in these 15 genes, as depicted in Figure (blue line). The number of genes with actionable genotypes per subject ranged from 0 to 10, with a median of 4. The distribution of the number of drugs with atypical dosage recommendations per subject is illustrated by the orange histogram in Figure . The median number was 8, meaning that subjects carried actionable genotypes leading to atypical dosage recommendations for a median of 8 drugs according to CPIC guidelines. In addition, we evaluated detection ratios of actionable genotypes for the 15 genes in each province and observed the highest ratio of over 99% for the VKORC1 gene in almost all 20 provinces (Table ).
3.5 Frequency of star alleles and predicted phenotypes
We analyzed the spectrum of common star alleles and predicted phenotypes for the seven important PGx genes CYP2D6, CYP2C19, CYP2C9, CYP3A5, UGT1A1, NUDT15, and TPMT. For CYP2D6, the star alleles included in this study were *1, *2, *4, *5, *10, *14, and *41. The most common star allele was *10, whose frequency was 46.60% in all samples. The populations in LIAONING and GUANGXI had the lowest (43.54%) and highest (66.53%) *10 allele frequencies, respectively (Table ). Further, we predicted the phenotype of each sample based on its genotype according to the enzyme activity score from the CPIC guidelines. Three types of phenotypes were predicted in this study: normal metaboliser (NM), intermediate metaboliser (IM) and poor metaboliser (PM). NMs, IMs and PMs accounted for 59.87%, 39.94% and 0.18% of all subjects, respectively (Table ). For CYP2C19, the alleles included in the analysis were *1, *2, *3, and *17. The *1 allele had a nationwide frequency of 63.46% and was the most common allele. In different provinces, the frequencies of the *1 allele ranged from 55.50% to 73.73%. The highest frequency was observed in GUANGXI, while the lowest was in HUNAN (Table ). The predicted phenotypes of CYP2C19 included PM, IM, NM, rapid metaboliser (RM) and ultrarapid metaboliser (UM). For CYP2C9, over 95% of alleles were determined to be the *1 allele; thus, over 90% (20,814/22,918) of samples were predicted to be NM. For CYP3A5, the frequencies of *1 and *3 were 28.20% and 71.80%, respectively. PMs and IMs of CYP3A5 accounted for 51.61% and 40.37% of all samples, respectively. For UGT1A1, the most frequent allele was *1, with a frequency of 67.48%. The proportions of IMs and NMs were similar in all samples (44.24% vs 45.33%). For NUDT15 and TPMT, NM was the predominant predicted phenotype (72.45% and 96.37%, respectively) (Tables and ).
3.6 CPIC therapeutic recommendations for 31 drugs
Clinical Pharmacogenetics Implementation Consortium therapeutic recommendations for 31 drugs based on the 15 genes included in this study are shown in Figure . When only considering genetic factors, 99.33% of participants would be recommended a decreased warfarin dose by CPIC guidelines.
We defined the high-risk ratio of a drug as the proportion of participants who were recommended an alternative drug by the CPIC guidelines, as described in Materials and Methods. Of the 31 drugs with CPIC guidelines, 20 have recommendations for an alternative drug when subjects carry certain specific genotypes (i.e., high-risk genotypes). The high-risk ratios of these 20 drugs ranged from 0.18% to 58.25%. Clopidogrel had the highest high-risk ratio, 58.25%, which means that only 41.75% of the subjects in the present study could be recommended clopidogrel at normal risk (Figure ). 3.7 Distribution of high-risk ratios in different provinces Because the high-risk ratios of the 20 drugs differed by orders of magnitude (0.18%–58.25%), we used RRs to study intra-country differences. The RR of a drug in a province equals the high-risk ratio in that province divided by the overall high-risk ratio in all 20 provinces, as described in Materials and Methods. We thus obtained RRs for the abovementioned 20 drugs in each province and drew a heatmap (Figure ). The highest RR (23.44, 95%CI: 8.83–52.85) was that of rasburicase in GUANGXI, which means that the high-risk ratio of rasburicase in GUANGXI was more than 23 times the overall ratio in all provinces. We also performed Fisher's exact test between the high-risk ratio in GUANGXI and that in all provinces and found that the high-risk ratio of rasburicase in GUANGXI was significantly higher ( p < 0.001). Similarly, the second highest RR (13.17, 95%CI: 4.06–33.22) was that of rasburicase in GUANGDONG, whose high-risk ratio was also significantly higher than that in all provinces ( p < 0.001). In addition, desipramine, paroxetine, and codeine had the same RR in HENAN (12.59, 95%CI: 2.52–41.24), and their high-risk ratios were significantly higher than those in all provinces ( p < 0.01).
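The high-risk ratio, RR, and Fisher's exact test calculations above can be reproduced in a few lines of Python. The sketch below (not the authors' code) uses the carbamazepine counts given in the Methods; the exact 2 × 2 table layout for the province versus nationwide comparison is our assumption.

```python
# Sketch of the high-risk ratio, RR, and Fisher's exact test calculations,
# using the carbamazepine counts from the Methods (2737/22,918 nationwide, 18/103 in HAINAN).
from scipy.stats import fisher_exact

def high_risk_ratio(n_high_risk: int, n_total: int) -> float:
    """Proportion of subjects carrying a high-risk genotype for a given drug."""
    return n_high_risk / n_total

national = high_risk_ratio(2737, 22918)   # ~0.1194 (11.94%)
hainan = high_risk_ratio(18, 103)         # ~0.1748 (17.48%)
rr = hainan / national                    # ~1.46

# Assumed 2x2 table: rows = (province, all provinces), columns = (high-risk, not high-risk)
table = [[18, 103 - 18],
         [2737, 22918 - 2737]]
odds_ratio, p_value = fisher_exact(table)
print(f"RR = {rr:.2f}, Fisher's exact p = {p_value:.3g}")
```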
DISCUSSION In this study, the vast majority (99.97%) of 22,918 individuals had at least one actionable genotype for the 15 genes; in a previous study, 91% of individuals had at least one actionable genotype. Studies have also shown that almost all individuals have one or more actionable pharmacogenetic polymorphism(s). , Thus, evidence to date highlights the utility and potential benefit of panel-based genotyping for pharmacogenomic testing. The number of actionable genotypes of PGx genes per subject ranged from 0 to 10, with a mean of 4, in this study, which is consistent with a previous report. Furthermore, we found that, overall, the participants harbored pharmacogenetic alleles that lead to atypical therapeutic recommendations by CPIC for a median of 8 drugs, indicating the value of such analysis for individuals taking multiple medications. It may be expected that testing a large number of pharmacogenes and drugs will identify clinically important variants and atypical drug responses in many subjects, and the degree of impact may currently be underestimated. PGx testing might optimize treatment and eliminate adverse drug events (ADEs) by utilizing an appropriate drug at the right dose and at the right time. Preemptive PGx testing may be a crucial component of precision medicine in the postgenomic era. Pharmacogenomic-guided medication prescription is therefore of substantial significance: for 20 drugs, alternatives could be prescribed according to CPIC guidelines when subjects carry high-risk genotypes. Among these drugs, the high-risk ratio of clopidogrel reached 58.25% in this study.
This means that 58.25% of subjects should be recommended an alternative drug instead of clopidogrel. Gregory's latest study reported a high-risk ratio for clopidogrel of 29.6%, which was significantly lower than in this study (29.6% vs. 58.25%, p < 0.001). Clopidogrel is a thienopyridine prodrug that requires hepatic biotransformation to form the active metabolite, and this conversion requires two sequential oxidation steps involving several CYP enzymes (e.g., CYP2C19 ). Only UMs (*1/*17, *17/*17) and EMs (*1/*1) of CYP2C19 are able to use clopidogrel at normal risk, whereas the CPIC recommends alternative antiplatelet therapy for IMs (*1/*2, *1/*3, *17/*2, *17/*3) and PMs (*2/*2, *2/*3, *3/*3). UMs and EMs accounted for only a small percentage of the population in the present study in China. In contrast, Gregory's study included African, East Asian, European, and South Asian populations, and the proportions of UMs and EMs were much higher than in this study. Another study in America showed that the distribution of metabolism phenotypes was 4.5% for UMs, 27.9% for EMs, 38.9% for NMs, 26.8% for IMs, and 1.8% for PMs, indicating alternative drug treatment for nearly 68% of subjects. In general, there are large differences in CYP2C19 metabolism phenotypes among races. Warfarin, the most commonly used oral anticoagulant worldwide, is prescribed for the treatment and prevention of thromboembolic disorders. Warfarin dosing is notoriously challenging due to its narrow therapeutic index and wide interindividual variability in dose requirements. Warfarin dose variability is affected by common CYP2C9 , VKORC1 and CYP4F2 genetic variants. CYP2C9 *1 leads to the "normal metabolizer" phenotype, CYP2C9 *2 and CYP2C9 *3 are the two most common decreased-function alleles, and CYP2C9 allele frequencies differ between racial/ethnic groups. , In this study, almost all subjects (99.33%) would require a decreased warfarin dose. ChinaMAP analysis of CYP4F2 , VKORC1 and CYP2C9 indicated that almost all Chinese individuals should use a reduced dose of warfarin, while Gregory's study reported a need for a decreased dose in 48.8% of subjects. Another study also demonstrated that race influences warfarin dose changes associated with genetic factors and recommended that warfarin dosing algorithms be stratified by race. The medication recommendations for clopidogrel and warfarin differ between individuals of Chinese and European ancestry, which emphasizes that race influences dose changes associated with genetic factors. The ratio of high-risk genotypes also differed between provinces in China. The high-risk ratio of rasburicase in GUANGXI was much higher than that nationwide (RR = 23.4, 95%CI: 8.83–52.85, p < 0.001), as was also the case in GUANGDONG (RR = 13.17, 95%CI: 4.06–33.22, p < 0.001). Rasburicase is used as prophylaxis and treatment for hyperuricemia during chemotherapy in adults and children with lymphoma, leukaemia, and solid tumours. Rasburicase is contraindicated for G6PD-deficient patients due to the risk of acute hemolytic anaemia and possibly methemoglobinemia, which can be fatal. The CPIC guideline for rasburicase therapy states that clinical units treating tumour lysis syndrome should assess G6PD status preemptively. In this study, we observed a much higher high-risk ratio of rasburicase in GUANGXI and GUANGDONG than in other provinces. G6PD deficiency is caused by pathogenic variants of the G6PD gene.
A Chinese national newborn screening for G6PD deficiency showed that the prevalence of G6PD deficiency in GUANGXI and GUANGDONG was higher than in other provinces, indicating a higher frequency of G6PD gene variants in these two provinces. The FDA-approved drug label states that individuals at higher risk for G6PD deficiency should be screened before starting rasburicase therapy. Thus, preemptive PGx testing in GUANGXI and GUANGDONG should not omit G6PD. In addition, the high-risk ratios of desipramine, paroxetine, and codeine in HENAN were higher than those in other provinces ( p < 0.001); the metabolism of these three drugs is affected by CYP2D6 polymorphisms. , For escitalopram, sertraline, and citalopram, the high-risk ratios in SICHUAN were much lower than those in other provinces ( p = 0.003); the metabolism of these three drugs is affected by polymorphisms in CYP2C19 . In general, attention should be paid to specific genes when conducting preemptive PGx testing in different geographical regions of China. In order to carry out comprehensive screening of important PGx genes at an affordable cost in large populations in China, a low-cost, high-throughput PGx panel was needed. In this study, we used a multiplex PCR method to achieve both amplification of the target regions and construction of the sequencing library in a single PCR reaction, saving both the cost and the time of library construction. The PGx panel was designed to cover hotspot variants of 52 PGx genes and required only about 0.5 million raw reads per sample on average, so we could run at least 120 samples per chip on the semiconductor sequencing platform. The sequencing cost per sample was as low as a few US dollars, and one sequencer could process at least 360 samples per day. With this PGx panel, we completed the testing of 22,918 samples efficiently and cost-effectively. However, one limitation of this study was that we did not detect copy number variations (CNVs). CNVs are uncommon for most PGx genes, with the exception of CYP2D6 . CNVs of functional alleles (mainly *1xN and *2xN) lead to the UM phenotype for CYP2D6 . In our study, the copy number of a functional CYP2D6 allele (mainly *1 and *2) was counted as one whether or not multiple copies of the gene existed, and we therefore predicted these phenotypes mainly as NM and rarely as IM. For example, a UM sample with a genotype of *1/*1x2 would be predicted as NM (*1/*1). The total frequency of predicted NM and IM was 99.6%, slightly greater than the previously reported 99%, which could be a result of UM misinterpretation. In addition, for detection of CYP2D6 PMs, we also performed a low-cost gap-PCR assay for the full gene deletion (*5 allele), which was observed at a frequency of 6.61%. CYP2D6 is highly complex, with more than 130 star alleles reported. Seven common star alleles of CYP2D6 were determined in this study and were expected to account for more than 90% of alleles in Chinese populations, as reported in a previous study. Adding target regions to our panel could improve the accuracy of CYP2D6 genotyping in future studies. To predict drug response and to make safer and more effective therapeutic recommendations, PGx is gradually shifting from reactive testing of a single gene to preemptive testing of multiple genes. NGS detects common variants with high accuracy and cost-effectiveness and is widely used clinically owing to its detection performance and low cost.
NGS-based PGx testing may be widely adopted in China in the near future, generating more evidence on drug–gene interactions to benefit patients. In summary, we demonstrate that 99.97% of the study population carried at least one actionable PGx variant, suggesting a high prevalence of actionable variants in the general population in China. Hence, preemptive PGx genotyping may benefit most individuals, with particular value for those taking multiple medications. Additionally, comparison with research in other populations indicates that medication recommendations vary across racial/ethnic groups. Furthermore, the diversity we observed among the 20 provinces suggests that preemptive PGx screening in different geographical regions of China may need to pay particular attention to specific genes. These results emphasize the importance of preemptive PGx testing and provide essential evidence for promoting clinical implementation in China. Q.-f. H., T.-f L., L.-y. Y., and M. H. conceived and designed the experiments. Q.-f. H., T. Y., T.-f. L., W. L., J.-x. W., Y.C., X.-k. Y., and K.-c. S. performed the data analysis. Q.-f. H., Y.-w. L., H.-f. L., T. Y., Q. L., K.-s. H., L.-f.J., X.-y.H., Y.-r. L., and L.-y. Y. collected clinical data and performed experimental verification. Q.-f. H., Y.-w. L., J.-x. W., W. L., T. Y., T.-f. L., L.-y. Y., and M. H. drafted and revised the manuscript. All authors provided important feedback on the analysis of the results and the revision of the article. This work was supported by the National Key Research and Development Program (No. 2017YFC0909303). The authors declare no conflicts of interest. Supporting information: Figure S1; Tables S1–S9.
Endodontic retreatment decision‐making: The influence of the framing effect
INTRODUCTION Epidemiological studies have reported a prevalence of periapical radiolucencies in root-filled teeth of between 12% and 72% (Kielbassa et al., ; Pak et al., ; Silnovic et al., ). Although diagnosis is not always straightforward, most cases are caused by an inflammatory lesion, apical periodontitis (AP). As AP in root-filled teeth tends to remain more or less asymptomatic over many years, its first diagnosis is often made during a routine examination or as an incidental finding. According to the prevailing academic paradigm, a lesion diagnosed as AP in association with a root-filled tooth is defined as an "endodontic failure," and thus implies a clinical decision and action (Reit & Kvist, ; Strindberg, ). For this reason, endodontology scholars since at least the 1980s have been puzzled and annoyed by the repeatedly demonstrated variation in clinical decisions about root-filled teeth with AP, and particularly by practitioners' reluctance to suggest and institute an endodontic retreatment procedure (Kvist et al., ; Reit et al., ; Taha et al., ). The many complex factors involved in the clinical decision-making process have made it difficult to present a coherent model for explaining and understanding these variations (Kvist, ). However, there is good reason to assume that these variations can be attributed to two main categories of uncertainty: facts and values (Kvist & Reit, ). With regard to facts, solid scientific evidence is lacking on questions regarding both the diagnosis of a "failure" and the outcome of retreatment or a no-intervention alternative (Frisk & Kvist, ). With regard to values, the variations in question may stem from different perceptions of disease, educational contexts, and values concerning illness and health (Kvist & Reit, ). Owing to these great uncertainties, authors have emphasized the importance of the patient's right to autonomy, and hence participation, in the process involving decisions on retreatment (Azarpazhooh et al., ; Kvist & Hofmann, ; Kvist & Reit, ). Autonomy, or self-determination, means that an individual has the right to decide on matters regarding his or her own body, mind, and life. The right to autonomy has a strong foundation in various ethical theories (Beauchamp & Childress, ). As the concept of autonomy also includes an individual's right to decide on his or her healthcare, a two-way communication process involving information sharing and decision-making should always precede a medical or dental decision on treatment or on refraining from it (World Health Organization, ). For a patient to be able to make an autonomous decision, the dentist must therefore provide the patient with all relevant facts: the findings, the etiology of the disorder, the various options available for dealing with it, and the risks, costs, probable outcome, and long-term prognosis (Kvist, ; Kvist & Hofmann, ). As in many other clinical situations, in the case of a root-filled tooth with AP, many of the facts required for the provision of valid evidence-based information are missing or highly uncertain (Frisk & Kvist, ; Kvist & Hofmann, ). There is also the matter of how the available information should be presented. A choice between options can be framed in different ways. The framing effect, first recognized by Tversky and Kahneman in 1981 (Tversky & Kahneman, ), is described as a cognitive bias whereby people decide between options on the basis of whether the options are presented with positive or negative connotations.
Although this cognitive bias has been explored in several medical decision-making contexts (Gong et al., ), it has attracted very little interest among clinical researchers in dentistry (Arora, ). In one study by Foster and Harrison ( ), however, first-year dental students simulated the role of patients in an experiment on the effect of framing in an endodontic decision-making situation (Foster & Harrison, ). In a scenario involving a symptomatic tooth with failed endodontic therapy, they were asked to select one of two treatment options: nonsurgical endodontic retreatment, or extraction and implant placement. Their selection of treatment was significantly influenced by biased presentations. The present study was set up to explore the possible influence of a framing effect when an individual is asked to choose between no intervention and retreatment of a root-filled tooth presenting with asymptomatic AP.
MATERIAL AND METHODS 2.1 Participants A total of 248 individuals (74 men and 173 women) who studied or worked within the area of dentistry were recruited on a voluntary basis. This number included 121 dental students, all of whom were studying at the Institute of Odontology at Sahlgrenska Academy, University of Gothenburg, Sweden. They had reached various training levels, with 49 in the first year, 29 in the second, and 43 in the third. Seventy‐four participants were drawn from the staff at the Institute of Odontology: 32 dentists, 7 dental hygienists, 32 dental nurses, and 2 people in administration and reception. The 53 remaining participants consisted of general dentists, both private and public employees, who were attending a course in endodontics at the Gothenburg Dental Society. 2.2 Questionnaire Two variants of a questionnaire were created, each designed to cause a respondent to answer from his or her perspective as a potential patient. The description of the clinical decision‐making situation was simple and patient‐oriented. Although the clinical situation and information were identical in both questionnaires, the two alternative treatment options were systematically framed in two different ways. The clinical situation was described as follows: Imagine that 5 years ago you were involved in a bicycle accident in which you injured your upper left central tooth. As a result, the tooth needed root canal treatment. It was opened up, cleaned of bacteria, and filled with a rubber‐like material. Finally, it was sealed with a plastic filling. Since then, you have not experienced any problems with this tooth . When you come to your dentist for your annual routine checkup today, the dentist decides to take a radiograph of the tooth. This shows a lesion in the bone around the tip of the tooth . On the radiograph, you can see that the bone around the root tip of the tooth is a little darker. This indicates that there are bacteria left inside the root canal that cause inflammation, which in turn becomes visible on the radiograph. The root filling appears to be short and incomplete . Your dentist presents two options. You have free dental care, so any treatment you choose is free of charge . In the first variant of the questionnaire, the intention was to frame the options in favor of refraining from retreatment now and of waiting and seeing (FW). Option A (Wait) . Refrain from retreatment now and wait and see. The chance that the tooth will be asymptomatic for the rest of your life is approximately 90%. The chance that any remaining infection will have no negative effect on your overall health is more than 99% . Option B (Retreat) . Retreatment, which involves remaking the root‐canal treatment so that the root filling becomes dense and is the correct length. Despite retreatment, the risk that the inflammation will not heal is approximately 25% . In the second variant of the questionnaire, the intention was to frame the options in favor of retreatment (FR). Option A (Wait) . Refrain from retreatment now and wait and see. The risk that the tooth later will become symptomatic in the form of pain and/or swelling that requires treatment is approximately 10%. The risk that any remaining infection has a negative effect on your overall health is less than 1% . Option B (Retreat) . Retreatment, which involves remaking the root‐canal treatment so that the root filling becomes dense and is the correct length. The chance that the inflammation will heal is approximately 75% . 
At the end of the questionnaire, respondents were requested to register their gender, age, and occupation or, if they were dental students, to state their year of study. The two variants of the questionnaire are presented in Figure . Figure 1 (a and b) The two different versions of the questionnaire (a = favoring wait and see [FW] and b = favoring retreatment [FR]) distributed to the participants in the study. 2.3 Distribution and procedures The same number of copies (150) of questionnaire variants FW and FR were printed and sorted into a stack in which FW consistently alternated with FR. The questionnaires were distributed on five different occasions. First, two of the authors (Agnesa Smakiqi and Daniela Henelius) gave a short introduction in which the participants were told that the questionnaire was supposed to provide the basis for a Master's thesis on clinical decision-making in root-filled teeth. Participants were also informed that reading and answering would require no more than 10 min, that all answers would be anonymous, and that, as participation was completely voluntary, the questionnaire could also be returned unanswered. Participants were asked not to communicate with each other when completing the questionnaire. The real purpose of the study was concealed from the participants, as was the fact that two different variants of the questionnaire would be distributed. The stack of questionnaires was distributed, with each participant receiving only one questionnaire. The questionnaire was distributed to the students in the lecture hall during a selected lecture and to the staff at the Folktandvården Education Clinic for Dentistry during a staff meeting. These questionnaires were all collected immediately. Distribution of the questionnaire among dentists took place during an evening course organized by the Gothenburg Dental Association (GTS). These participants were required to submit the questionnaire either immediately or by post in a prestamped letter to the Department of Endodontology. The information from each questionnaire was then transferred to an Excel data sheet (Microsoft Corp). 2.4 Ethical considerations This study was originally part of a master's thesis at the Institute of Odontology, Sahlgrenska Academy, University of Gothenburg, Sweden. No patients or patient data were involved in the study, except for an anonymous radiograph, which the patient had consented to be used for this purpose. All responders to the questionnaire were informed that answering would be anonymous and that participation was completely voluntary; the questionnaire could be returned unanswered without registration. 2.5 Statistical methods Before the statistical analysis, participants were divided by gender and into three groups on the basis of age: 18–25, 26–49, and 50+. We also categorized the respondents as follows: dental students (by training level: 1st, 2nd, or 3rd year), staff at the dental school, or dentists attending the course. For comparisons between groups, Fisher's exact test was used with a two-sided 5% significance level, calculated with the online calculator at https://www.graphpad.com .
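As an aside on the framing manipulation in the questionnaire (Section 2.2), the small sketch below (illustrative only, not part of the study materials) shows how the same underlying probability can be rendered either as a "chance" of the good outcome or as a "risk" of the bad one; the 75% healing probability is taken from the questionnaire text above.

```python
# Illustrative rendering of one probability in a gain ("chance") or loss ("risk") frame.
# Wording is simplified relative to the actual questionnaire.
def frame(p_good: float, event: str, gain_frame: bool) -> str:
    if gain_frame:
        return f"The chance of {event} is approximately {round(p_good * 100)}%."
    return f"The risk of not {event} is approximately {round((1 - p_good) * 100)}%."

print(frame(0.75, "healing", gain_frame=True))    # FR variant: chance of healing ... 75%
print(frame(0.75, "healing", gain_frame=False))   # FW variant: risk of not healing ... 25%
```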
RESULTS A total of 248 individuals participated in our study, 141 of whom (56.9%) chose retreatment and 107 of whom (43.1%) chose the wait‐and‐see option. One hundred and twenty‐five participants (50.4%) had received the questionnaire variant framed in favor of refraining from treatment (wait‐and‐see) (FW), and 123 participants (49.6%) had received the variant framed in favor of retreatment (FR). Whereas 69 (55.2%) of the participants who had received questionnaire FW chose the option to refrain and wait, 56 (44.8%) chose retreatment. In contrast, whereas 38 (30.9%) of the participants who had received questionnaire FR chose to refrain and wait, 85 (69.1%) chose retreatment. This difference was statistically significant ( p = .0002) (Table .) Seventy‐four (30%) of the participants were men and 173 (70%) were women. When the possible framing effect was analyzed on the basis of gender, a statistically significant framing effect ( p = .0004) was found among women. In men, a framing effect was registered numerically, but the difference was not statistically significant ( p = .20) (Table ). One hundred and eight participants (45%) were in the 18–25 age group; 45 (18%) in the 26–49 age group; and 89 (37%) in the 50+ age group. A statistically significant framing effect ( p = .020) was detected in the 18–25 age group. In the 26–49 and 50+ age groups, the framing effect was not statistically significant (Table ). The results were also analyzed on the basis both of occupational category and of the occasion on which the questionnaire was distributed. A framing effect was observed regardless of the category but reached statistical significance only among 3rd year students ( p = .016) (Table ).
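The main comparison can be checked with a two-sided Fisher's exact test on the counts above; the short sketch below (not the authors' GraphPad workflow) should return a p value close to the reported .0002.

```python
# Fisher's exact test on the main 2x2 table: questionnaire variant x chosen option.
from scipy.stats import fisher_exact

table = [[69, 56],   # FW variant: wait-and-see, retreatment
         [38, 85]]   # FR variant: wait-and-see, retreatment
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.4f}")
```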
DISCUSSION The results of this study show that a framing effect can be expected to play a role in endodontic retreatment decision-making. The pooled data from all respondents showed a statistically significant effect. A similar effect was seen when the respondents were divided into subgroups categorized by gender, age, occupation, and occasion. However, the effect was not statistically significant in all analyses. Although it cannot be ruled out that no framing effect is present in some groups of respondents, the more probable explanation is that some of the subgroups were not large enough, resulting in a statistical Type II error. Future research projects could aim to evaluate whether the framing effect is greater, smaller, or not demonstrable at all in different groups of potential decision-makers depending on gender, age, level of education, or other group-defining characteristics. The explanation for the framing effect can be found within the framework of prospect theory (Kahneman & Tversky, ). This theory, which essentially concerns economic behavior, challenged the idea of rationality among decision-makers as it had explicitly been formulated in expected utility theory (Von Neumann & Morgenstern, ). Prospect theory, which was based on results from controlled studies, describes how individuals assess their loss and gain perspectives in an asymmetric manner. The theory assumes that there are two phases of decision-making. In the first phase, the stated alternatives are automatically evaluated, a process that involves analysis and simplification of the information they contain. In the second phase, the decision-maker considers the alternatives and chooses the one he or she judges to be most beneficial (Kahneman & Tversky, ). When a choice is being made between two options, an alternative that is described in positive terms seems preferable to one that is described in negative terms, even though both descriptions state exactly the same factual information. In a classic study, McNeil et al. ( ) investigated how variations in the way information was presented influenced the choices made by ambulatory patients, graduate students, and physicians when deciding between alternative therapies (radiation or surgery) in cases of lung cancer. Different groups of respondents received input data that differed according to whether the treatment outcomes were framed in terms of the probability of living or the probability of dying. In all three groups of respondents, the attractiveness of surgery relative to radiation therapy was greater when the problem was framed in terms of the probability of living rather than in terms of the probability of dying. In our study involving a root-filled tooth with AP, the factually identical information on the "wait and see" and "retreatment" options was framed using either the word "chance" or the word "risk" to indicate the probability (likelihood) of outcomes. However, the connotations of the words "risk" and "chance" are essentially different (Li et al., ; Morizot, ). While chance has a positive connotation (the likelihood of something good happening), risk has a negative connotation (the likelihood of something bad happening). By using a more neutral word such as "probability" or "likelihood," a clinician presenting prognostic assumptions about a clinical option could possibly reduce the framing effect.
To enhance the framing effect in our experiment, we combined the word "chance" with "healing" (a positively laden expression), and "risk" with "nonhealing" (a negatively laden expression). Similarly, a statement of a 90% likelihood of success highlights the attractive outcome of a procedure, whereas a 10% likelihood of failure tends to highlight the unattractive outcome. Partly because of the strongly value-laden component of the "success" and "failure" classifications, various authors have suggested alternative systems and terms to evaluate and classify the outcome of root canal treatment (Friedman & Mor, ; Messer & Yu, ; Wu et al., ). Language evidently plays an important role in many aspects of medicine and healthcare and may be used as a powerful tool in clinical decision-making situations (Srivastava, ). Thus, the clinician presenting the information to the patient may, consciously or unconsciously, influence the patient's choice in favor of a particular option. In the absence of strong scientific evidence for the benefits or harms of a particular choice, it may also be assumed that the clinician's framing of the options, and the way they are presented to the patient, is influenced by various heuristic biases concerning probability (Hicks & Kluemper, ; Reit et al., ; Tversky & Kahneman, ). One such bias, "availability," is the phenomenon whereby people assess the frequency of a class or the probability of an event on the basis of the ease with which instances or occurrences can be brought to mind (Tversky & Kahneman, ). For example, the influence of availability may be expected when the retreatment and wait-and-see options for new patients with an asymptomatic AP are framed by a dentist who recently met a patient with a flare-up in a root-filled tooth. In particular, this is to be expected if, in any aspect, the present case resembles the recent experience of a patient in severe pain, as explained by the principle of "representativeness" (Hicks & Kluemper, ; Tversky & Kahneman, ). Representativeness is defined as a heuristic bias that occurs when the similarity of objects or events confuses people's thinking regarding the probability of an outcome. Quite apart from the heuristic reasoning about probability that may influence dentists' expectations and preferences, their clinical choices may even be affected by their prejudices about their patients (Patel et al., ). An interesting finding in our study was that even though a framing effect was evident at the group level, both options, regardless of the variant of the questionnaire, were chosen rather frequently. This finding indicates that factors other than how the information was framed are important for the respondent's choice. This is consistent with previous studies on the subject of endodontic retreatment decision-making (Kvist, ; Reit & Kvist, ). The praxis concept theory of endodontic retreatment decision-making was proposed by Kvist et al. ( ). At its core is the hypothesis that interindividual variation in decision-making on endodontic retreatment can largely be explained by variation in the values of the individual decision-makers. The observations in this study, which involved respondents with varied experiences and backgrounds, do not falsify this theory.
To explain the origin of the various values involved in the endodontic retreatment decision-making process, it is assumed that an individual's values are, at least in part, a merged mental disposition developed through experience in different environments (Kvist, ; Kvist & Reit, ; Kvist et al., ; Taha et al., ). For example, it has been shown that, in endodontic retreatment decision-making situations, endodontists systematically make decisions differently from students, general dental practitioners, or specialists in other disciplines (Bigras et al., ). Any acknowledgment of the framing effect challenges the concepts of patient autonomy and informed consent. By consciously or unconsciously choosing value-laden words and framing different treatment options, a therapist will influence the patient's choice, intentionally or otherwise. Only if the dentist is aware of this problem and consciously attempts to provide information in ways that are as neutral as possible will he or she be able to reduce this effect. On the other hand, the power of the framing effect in clinical decision-making may also be used deliberately to influence patients to make the "right" decision, "right" in the sense that there are good reasons and good evidence to believe that a certain decision is in the patient's best interests. Sherman et al. ( ) and Patel et al. ( ) showed how the framing effect was used to encourage the use of dental floss: those who received information in the form of a gain-framed video were more likely to use dental floss according to the recommendations for a period of 6 months than those who saw a loss-framed video. This study makes no claim to fully chart the framing effect in clinical decision-making in connection with root-filled teeth. Because the respondents were not drawn from nonprofessional groups and all had some kind of affiliation with dentistry, the study's external validity can be questioned. It may also be argued that the factual information (the percentages regarding healing, the likelihood of becoming symptomatic, and the influence of any remaining infection and inflammation on systemic health) is not based on the best available evidence. However, it was not our purpose to systematically review the best current evidence on the matter. Instead, our purpose was to apply the phenomenon of framing to a well-known clinical decision problem within endodontics, to provide some empirical support, and to discuss various questions that arose and their implications.
CONCLUSION A framing effect is likely to play an essential role in endodontic retreatment decision‐making of root‐filled teeth with asymptomatic AP.
Thomas Kvist, Daniela Henelius, and Agnesa Smakiqi all made substantial contributions to the conception and design of the study. Daniela Henelius and Agnesa Smakiqi were responsible and involved in data collection. Thomas Kvist, Daniela Henelius, and Agnesa Smakiqi were all involved in data interpretation, statistical analyses, drafting, and critically revising the manuscript. All authors have given final approval for the version to be published.
The authors declare no conflict of interest.
Malocclusions and oral dysfunctions: A comprehensive epidemiological study on 359 schoolchildren in France
INTRODUCTION Many authors have shown interest in the impact of dysfunctions on the morphogenesis of the dental arches (Delaire, ; Grabowski et al., ; Phillippe, ; Talmant & Deniaud, ). In the sixties, Melvin Moss put forward his functional matrix theory, stating "a function to each organ" or, better, "each organ to its own function" (Moss & Salentijn, ). Since then, his ideas have inspired ardent supporters and fervent opponents. Indeed, alongside this functional current appeared the genetic current (Tweed, ) and the synthetic one (Björk, ; Delaire et al., ), which seems to be a compromise between the former two (Saccomanno et al., ). Thus, the Schwartz classification for muscular types (Schwartz, ) and the Sassouni classification for skeletal types (Sassouni, ), both based on muscle physiology, can be considered applications of Moss's theory. Current data allow us to say that most craniofacial functions are instrumental in developing the face and in establishing occlusion: a balance between the different muscle groups will allow a more harmonious development (Talmant & Deniaud, ). Any imbalance or dysfunction will have an impact on morphogenesis and could lead to osseous deformations and anomalies of tooth position, form and function being closely linked (Ovsenik et al., ). The reported prevalence of oral respiration varies among authors from 15% to 55% (Abreu et al., ; De Menezes et al., ; Huber & Reynolds, ; Leal et al., ). The prevalence of primary deglutition also varies from one author to another, reaching 36% for Garliner ( ). However, very few national or international studies have been carried out on this subject. Indeed, the literature review reveals a significant number of "authors' views" (Delaire, ; Grabowski et al., ; Phillippe, ; Talmant & Deniaud, ) without supporting evidence from a rigorously conducted study; only one such study was identified, carried out in 2006 by the Epidemiology department of the Faculty of Dentistry in Paris (Souames et al., ), whose objective was to study orthodontic treatment need in French schools in the Val d'Oise department. It would therefore be interesting to study oral dysfunctions, especially primary swallowing and oral respiration, and their consequences on dysmorphy. Thus, the objective of this epidemiological survey, based on 11-year-old patients, was to evaluate, in a screening situation, the relationship between malocclusion (simple extra- and intraoral examination) and oral dysfunctions (low resting position of the tongue, swallowing, and respiration), to highlight the potential impact of these risk factors on the development of malocclusions, and to assess the need for orthodontic treatment according to the criteria defined by the French National Authority for Health (HAS) (HAS/ANAES et al., ). The second objective was to relate all these variables by uni- and multivariate analysis to deduce the risk factors associated with malocclusions.
MATERIAL AND METHODS 2.1 Ethical consideration and registration This study was conducted in accordance with the Declaration of Helsinki. Ethical approval was sought before the study from the internal ethics committee of Nice University, and an opinion on the project was also sought from the Institutional Review Board of Nice University following its creation in 2021; it likewise issued a favorable opinion on the project (01/2022, number 2022-008). Authorization to conduct this study was obtained from the school authorities: both the senator-mayor of the city and the rector of the Academy of Nice approved the protocol, and an agreement was signed between the city of Cagnes-sur-mer and the University. The children's parents were requested to sign an informed consent form, after being informed of the purpose and benefits of the study, before the beginning of the study. This study is registered in the database for registration of clinical studies ( ClinicalTrials.gov , identification number NCT04869839). 2.2 Study design This manuscript was written following the CONSORT (Consolidated Standards of Reporting Trials) guidelines. The study was designed as an exhaustive cross-sectional study in the sixth-grade classes of all the elementary schools of the city of Cagnes-sur-mer (a town of about 50,000 inhabitants located in the department of Alpes-Maritimes, France). 2.3 Participants, eligibility and setting A total of 359 children were enrolled in the study among the 416 children registered in sixth-grade classes in Cagnes-sur-mer, that is, 86% of the children, between April and May 2017. Very few parents refused to give their agreement (2%), and the remaining 12% consisted of children who were absent on the day of the examination. 2.4 Inclusion/exclusion criteria All children whose parents gave their informed consent were included. No exclusion criteria were applied. 2.5 Dental examinations Children were welcomed in small groups to learn about oral health care, traumas and nutrition. Children were invited to sit, each in turn, on a chair in a room separate from their classroom, to respect the children's privacy and the confidentiality of the collected data. Clinical examinations were carried out using disposable dental kits (probe and mirror) under natural light or, when natural light alone was deemed insufficient, under artificial light from a headlamp. Even though the screening was not performed under the same conditions as a dental chairside examination, a ruler and a headlamp were used, as well as a portable cart with an associated compressor and air/water syringe. All orthodontic examinations were carried out by an orthodontist (ED), who is an author of the article. Self-calibration was done beforehand by training with photographs. The other oral examinations were carried out by a general practitioner (LB), who is also an author of the article. Eight students in their final year of study were also present to assist and perform preventive actions with the children. Each parent received a letter notifying them of their child's oral health status and whether or not an appointment with a dentist or an orthodontist was necessary. 2.6 Data collection The following data were collected: – Untreated dental caries. – The Silness and Löe ( ) plaque index (0: no plaque; 1: plaque not visible to the naked eye but removable with the probe; 2: plaque visible to the naked eye; 3: abundant plaque visible to the naked eye in the sulcus and on the marginal gingiva) (Abreu et al., ).
– An extraoral examination: the sub-nasal profile, the nasolabial angle and the labiomental fold. Cephalometric standards for the skin profile were established by Tweed, Steiner, Burstone, Ricketts, Holdaway and Merrifield in the 1960s. The sub-nasal profile is assessed in relation to two lines perpendicular to the Frankfurt plane: Izard's plane and Simon's plane. If the chin point is located: – In front of Izard's plane: the profile is convex. – Behind Simon's plane: the profile is concave. – Between the two: the profile is normal. The nasolabial angle is measured between the columella of the nose and the upper lip. The angle should range between 85 and 105 degrees and is influenced by the position and angle of the upper incisors and the anatomy of the nasal columella. The labiomental fold divides the chin into an upper and a lower half. – An intraoral examination: Angle's classification of molar and canine relationships, increased overjet (overjet >3 mm), presence of a crossbite (yes/no), shifted midlines (yes/no), presence of a deep bite (overbite >3 mm) and presence of an open bite (overbite <0). – A functional examination: type of respiration, type of swallow and position of the tongue at rest. The modes of breathing and swallowing were recorded during the clinical examination. The child was observed in a relaxed position. We evaluated respiration by first looking for the characteristics of adenoid facies: an elongated face, the presence of under-eye shadows, a pinched nose, parched lips, a half-open mouth and an open gonial angle. It was noted whether he or she had competent lip closure. Then, as recommended by Villa et al., the Glatzel test (with a dental mirror) and the Rosenthal test were carried out. Every child was asked to breathe with his/her mouth closed. Within 15 breaths, it was noted whether the child needed to open his or her mouth or was out of breath at the end of the test. Tongue thrust swallowing data were obtained at the time of the clinical examination. To examine the presence of tongue thrusting, the patients were asked to swallow their saliva three times during the same visit. There should be no facial contraction or lingual interposition, and the arches should remain in occlusion. When in doubt, another swallow was requested until the orthodontist was satisfied with their judgment. Tongue thrust was defined as a protrusion of the tongue between the upper and lower incisors or the cuspids during swallowing. The position of the tongue at rest was assessed by gently retracting the cheek and looking underneath the tongue. We used the criteria of Van Dyck et al. The "physiological" (normal) resting position of the tongue was defined as the tongue in contact with the palate extending to the palatal aspect of the alveolar ridge (and not between the anterior and/or posterior teeth or directed towards the lower anterior teeth). In addition, for each child included in this survey, an evaluation of the need for orthodontic treatment according to the criteria retained by the HAS was performed (HAS/ANAES et al., ). The HAS provides guidance for screening by defining semiological elements which, when observed during screening, should prompt referral for a specialized consultation. Any dysfunction can be considered a warning sign and should lead to a morphological examination. Respiration, swallowing, phonation, mastication, suction and mandibular kinematics should be monitored.
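To summarize the extraoral profile criteria above in a compact form, a small illustrative helper (not part of the survey protocol; the category labels are ours) might look as follows.

```python
# Illustrative encoding of the sub-nasal profile and nasolabial angle criteria above.
def subnasal_profile(chin_position: str) -> str:
    """chin_position: 'in_front_of_izard', 'behind_simon' or 'between_planes'."""
    mapping = {
        "in_front_of_izard": "convex",
        "behind_simon": "concave",
        "between_planes": "normal",
    }
    return mapping[chin_position]

def nasolabial_angle_is_normal(angle_degrees: float) -> bool:
    # Normal range stated in the text: 85 to 105 degrees
    return 85.0 <= angle_degrees <= 105.0

print(subnasal_profile("between_planes"))    # normal
print(nasolabial_angle_is_normal(110.0))     # False
```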
The morphological examination includes an exobuccal examination looking for asymmetries, vertical disproportions of the face, alterations of the profile, permanent labial inocclusion at rest and alterations of the smile, and an endobuccal examination looking for discordances of the maxillary and mandibular arches, anomalies of the incisal relationships and disturbances of the alignment of the teeth. Under such “screening” conditions in the schools, casts and X‐rays were not available, and we had to limit ourselves to clinical examinations.

2.7 Sample size calculation and statistical analysis

A previous study carried out in Paris reported that 20% of the examined children needed orthodontic treatment (De Menezes et al., ). The minimum sample size can therefore be calculated with the following formula: n = t² × p(1 − p) / e², where t is the standard normal variate (1.96 for a 5% type I error, p < .05), p is the expected proportion in the population based on the prior study, and e is the absolute error or precision, set at 5%. Therefore, at least 244 children had to be included. Descriptive statistics were used to summarize the data; frequency tables and univariate analyses (cross‐tabulations) were then carried out using the “presence of a malocclusion” as the dependent variable. The chi‐squared test or Fisher's exact test was used for qualitative variables, whereas the Student t test was used for quantitative variables. A significance threshold of .05 was adopted. The multivariate analysis, a binary logistic regression, was carried out by entering into the model all variables with a univariate p < .20. The software used was SPSS 25.0.
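As an illustration of the sample size formula above, a minimal sketch in R is given below (the study itself used SPSS; the values are taken directly from the text, and with t = 1.96 the formula gives roughly 246, of the same order as the 244 reported, the small difference presumably reflecting rounding of t):

t_val <- 1.96   # standard normal variate for a 5% type I error
p_hat <- 0.20   # expected proportion, from the prior Paris study
e     <- 0.05   # absolute precision
n_min <- t_val^2 * p_hat * (1 - p_hat) / e^2   # minimum sample size
ceiling(n_min)                                 # about 246 children

The 359 children actually enrolled comfortably exceed this minimum.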
RESULTS

3.1 General characteristics of the studied population

A total of 416 children attended sixth‐grade classes in the public schools of the city of Cagnes‐sur‐mer. Among them, nine could not be examined because their consent was not obtained, and 48 were absent on the day of the study. The present study therefore examined 359 children, a participation rate of 86%. The average age was 10.98 ± 0.43 years and the gender ratio was 0.48 (Table ). The schoolchildren were quite evenly split between the different neighborhoods of the city. No children with syndromes or disability were included, as the study was conducted in primary schools, not in a Medico‐Educational Institute.

3.2 Oral data

3.2.1 Dental status

The children had an overall satisfactory oral health status. More than two‐thirds (71.5%) had never had dental decay, and only 10.6% had ever received conservative care. However, 12.6% exhibited untreated dental cavities on their permanent teeth. Based on the Silness and Loë's ( ) plaque index, nearly half of all schoolchildren showed either plaque visible to the naked eye (35.2%) or an abundance of soft matter (12.3%); only about one in four children (27.1%) had no plaque at all. About 20% of all children were engaged in active orthodontic treatment.

3.2.2 The orthodontic examination

– The extraoral examination of the schoolchildren assessed the form of the sub‐nasal profile, which was mostly convex or straight (48.6% and 47.5%), concave profiles being rare (3.9%). The nasolabial angle was most often normal (60.6%) and open in 24.9% of cases. The labiomental fold was normal (60.6%) or jutting (27.7%) and rarely faded (11.7%).
– The endobuccal examination revealed canine and molar relationships evenly distributed between Classes I and II, while Class III represented only 3% of the children. It also revealed an absence of crossbite in 87.7% of cases and a bilateral crossbite in only 2.2% of the children. However, more than one‐third of the children had a shifted midline (33.2%). An increased overjet was present in half of the cases (50.3%), and a deep bite was found in 44.7% of the cases. In contrast, an open bite was infrequent (about 5%) (Table ).
– The examination of oral functions revealed that atypical swallowing was found in most children (87%), nasal respiration was present in only half of the children, and the resting tongue position was low in most cases (85%) (Table ). Additionally, 88% of children needed orthodontic treatment according to the HAS criteria.

3.3 Univariate analyses exploring the relationships between the need for orthodontic treatment and other variables

Overall, most children (88%) exhibited a malocclusion, regardless of gender ( p = .912) or age ( p = .18). At a functional level, malocclusions were statistically linked to a low position of the tongue ( p < .001), a primary swallowing pattern ( p = .03) and a mixed or oral respiration ( p = .001; Table ).

3.4 Multivariate analysis

Finally, to prioritize the effects of the univariate significant variables on our variable of interest, a logistic regression was carried out. In the multivariate analysis, only two variables remained significant: an abnormal respiration (mixed or oral) and a low position of the tongue at rest were each associated with an approximately threefold increase in the risk of malocclusion (Table ).
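To make the link between the univariate screening and the multivariate model explicit, a minimal R sketch of an equivalent analysis is shown below; the study itself was analysed in SPSS, and the variable names used here are purely illustrative rather than the actual dataset fields:

# Binary logistic regression: malocclusion (0/1) against the functional
# variables retained from the univariate screening (p < .20)
fit <- glm(malocclusion ~ respiration_abnormal + tongue_low + primary_swallowing,
           data = children, family = binomial)
exp(coef(fit))     # adjusted odds ratios (reported as roughly 3 for abnormal
                   # respiration and for a low resting tongue position)
exp(confint(fit))  # 95% confidence intervals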
DISCUSSION

According to the present epidemiological survey of 359 children, the prevalence of dental malocclusions and functional disorders is substantial and is associated with a high need for orthodontic treatment according to the HAS criteria. Furthermore, oral respiration and a low tongue position at rest are the most important factors in the prediction of malocclusion, thus encouraging the prevention and correction of functional disorders as early as possible. This cross‐sectional descriptive epidemiological survey included 359 children (86% participation rate), which supports the representativeness of our sample and is consistent with our sample size calculation. Fifth‐grade children (mean age 11 years) were selected, in accordance with the age reference chosen by the World Health Organization (WHO). Moreover, this age corresponds to the last year of primary school, which seems to us to be the right time to carry out a screening and refer to an orthodontist if necessary and if the child is not already being treated. The need for orthodontic treatment was defined according to the criteria selected by the HAS, which has been the French reference since 2012.

4.1 Need of orthodontic treatment

Overall, among the 359 examined children, 88% exhibited a malocclusion. This figure is higher than that found in the study by Souames et al., conducted in 12 schools in the Val d'Oise department in 2006 (Souames et al., ), in which 28.6% of children were borderline for orthodontic treatment need and 21.3% really needed treatment. However, these results are difficult to compare with those of the present study because the index measuring the need for treatment is different: the index used by Souames et al. was the Index of Orthodontic Treatment Need (IOTN), as the HAS recommendations were not yet in force in France. In 1989, a study conducted by Brook and Shaw ( ) in Manchester showed that 32.7% of 11–12‐year‐old schoolchildren needed orthodontic treatment, whereas a study conducted by Burden in Ireland in 1995 found that 36% were in need of orthodontic treatment (Burden, ). However, Ingervall and Hedegard reported a treatment need of 53% in 1975 in Lapland (Ingervall & Hedegård, ). Results therefore vary both with the country where the study is conducted and with the index chosen.

4.2 Caries status

The dental health status of the examined children was rather satisfactory since 71.5% of them had never experienced dental caries. The proportion of untreated decay has clearly improved compared to the national figures obtained during the last survey, conducted by the French Union for Oral Health UFSBD (Union Française pour la Santé Bucco‐Dentaire, ) in 2006, in which only 56% of children were caries‐free (HAS/ANAES et al., ). It therefore seems that the prevention campaigns conducted for many years in France have been successful. However, a key factor in this success probably lies in the fact that the Sud région in general, and the Alpes‐Maritimes department in particular, benefit from a very high standard of living, which is known to be highly correlated with dental health status (Muller‐Bolla et al., ).

4.3 Part played by oral functions

Nearly half of the examined children had abnormal respiration. Under normal circumstances, and particularly at rest, the only physiological respiratory route in healthy subjects is the nasal passage. Oral respiration is a supplementary route used when needed (physical activity, stress, etc.) or when there is an obvious nasal obstruction.
The existence of an open‐lip posture at rest (labial inocclusion) is considered a sign of oral or mixed respiration (Harari et al., ). This is why, in the functional examination, respiration was classified as “abnormal” when spontaneous opening of the lips was observed while the child was breathing. Oral respiration is therefore pathological: with the mouth open, the tongue adopts a low position, which leads to mandibular propulsion, underdevelopment of the upper jaw, as well as hyperdivergence and open bite (Rossi et al., ). Patients present an “adenoid” facial appearance: elongated face, open mouth, pinched nose, under‐eye shadows and dry lips (Raffat & Ul Hamid, ). Overall, sleep disorders with non‐restorative sleep and snoring are frequently associated with these malocclusions (Katyal et al., ). In this study, the high percentage of oral respiration could also be explained by the season in which the study was carried out: at the end of spring and the beginning of summer in Provence, many allergies occur owing to heavy exposure to pollens, particularly those of olive trees and cypresses.

Regarding swallowing, more than 87% of children showed abnormal swallowing in this study. This figure seems high, although authors in the literature are not unanimous about the frequency of dysfunctional swallowing. Indeed, the prevalence of primary swallowing varies from 39% for Hanson and Cohen ( ) to 75% for Launey et al. ( ). For Fletcher et al. ( ), atypical swallowing decreases with age. Moreover, the Hanson report on the prevalence of lingual propulsion according to age also shows a decrease in lingual propulsion in the mixed dentition: it would be present in 40%–50% of children in early mixed dentition at 6/7 years of age and then decrease to 30%–40% in late mixed dentition at 11/12 years of age (Hanson & Cohen, ). A good position of the tongue is important because the resting posture is maintained for about 22 h a day. However, in the case of oral respiration, the tongue lies in a low position to free the upper airway (Garliner, ).

4.4 Interactions between variables and the need for orthodontic treatment

Furthermore, this study did not only aim to estimate the oral health status and orthodontic need of children in a French city: the objective was also to see to what extent the need for orthodontic treatment was correlated with oral dysfunction. In the multivariate analysis conducted in this study, two variables were significant: abnormal respiration and a low position of the tongue. Indeed, the analysis shows that when respiration is abnormal, the probability of needing orthodontic treatment is multiplied by approximately 3 (OR = 3.2, 95% CI [1.4–7.3]). This probability is multiplied by 3.43 when the tongue is in a low position at rest (OR = 3.43, 95% CI [1.7–7.1]). In agreement with the literature cited above and with the high rate of treatment need found in this study, it would appear that a low tongue position and dysfunctional respiration are the most influential criteria regarding alveolodental dysmorphosis; other factors not investigated in this study, such as the impact of growth, are also worthy of further study. This is in agreement with Frapier et al.
( ), who showed in 2005 the development of dysmorphosis related to dysfunctions. However, the presence of a primary swallow was not significantly correlated with the need for orthodontic treatment ( p = .63). These results are in agreement with the conclusions of other authors who consider that lingual propulsion during swallowing is not the cause of malocclusions (Raffat & Ul Hamid, ): the low position of the tongue, because of the length of time over which it acts (Harari et al., ; Raffat & Ul Hamid, ), is most likely the main cause.
LIMITATIONS The need for orthodontic treatment was evaluated according to the criteria defined by the HAS. It would be advisable to evaluate this need using the IOTN index, which is the world reference index in this area, to be able to compare our results more easily with those of other research teams. This study was conducted in a screening context, and it was therefore not possible to take radiographs or dental impressions. The examinations were carried out with portable equipment, although artificial light was used when natural light alone was deemed insufficient. Indeed, in a screening context, the authorizations given by the official authorities (Hospital and Dental School of Nice) can only concern non‐interventional procedures. Thus not all of the IOTN criteria could be evaluated.
Conception and design: Laurence Lupi. Acquisition of data: Leslie Borsa and Déborah Estève. Analysis and interpretation of data: Leslie Borsa, Déborah Estève, Carole Charavet, and Laurence Lupi. Drafting the manuscript: Leslie Borsa, Déborah Estève, Carole Charavet, and Laurence Lupi. Reviewing and editing the manuscript: Leslie Borsa, Déborah Estève, Carole Charavet, and Laurence Lupi. All authors gave final approval and agreed to be accountable for all aspects of the work.
The authors declare no conflict of interest.
Underweight and obesity are related to higher mortality in patients undergoing coronary angiography: The KARDIO invasive cardiology register study
INTRODUCTION

Obesity is related to coronary artery disease (CAD) risk factors, such as hypertension, hyperlipidaemia, and diabetes. Patients with obesity have an increased risk of cardiovascular diseases (CVDs) and all‐cause mortality, which is partly due to the accumulation of CAD risk factors. Obesity may increase the risk of fatal CVDs due to a more extensive and diffuse form of CAD. Subsequently, obesity increases the risk of other common CAD‐related adverse events, including heart failure (HF), atrial fibrillation (AF), and sudden cardiac death (SCD). , Although a J‐shaped relationship between body mass index (BMI; kg/m²), a common measure of body weight status, and mortality has generally been reported in the general population, the relationship between the whole spectrum of BMI levels (from very low to very high) and mortality among cardiac patients is still debatable. Some epidemiological studies suggest that slightly higher BMI levels might be associated with better outcomes, particularly a lowered risk of mortality, in patients with existing HF. , This phenomenon has led to the concept of the “obesity paradox” and has been observed in patients with CVDs such as acute coronary syndromes (ACSs), CAD, and AF. , , , Indeed, it has been suggested that mildly overweight patients with ST‐elevation myocardial infarction (STEMI) may have less extensive CAD and even better left ventricular systolic function and quality of life compared to patients with normal weight or more severe obesity. A meta‐analysis of over 200,000 patients with acute myocardial infarction (AMI) reported that patients with elevated BMI had a 30%–40% lower mortality risk compared with individuals with normal BMI. Another large observational study with prospectively collected data strengthens the obesity paradox concept in patients with ACS or chronic CAD. The phenomenon of the “obesity paradox” may also exist among elderly CAD patients who need invasive interventions such as percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG), with a higher mortality in patients with a very low BMI. However, there is very limited evidence on the relationship between BMI and mortality risk in cardiac patients in the contemporary era of invasive cardiology; a comprehensive evaluation based on current, up‐to‐date data is therefore needed. To our knowledge, no previous study has included patients across the full range of BMI (from very low to extremely high) undergoing invasive coronary angiography with mortality follow‐up beyond 12 months. Using an ongoing real‐life multicentre Finnish coronary angiography register, we sought to explore whether the obesity paradox also exists in invasive cardiology practice, by investigating the association between extremes of BMI levels and overall mortality in patients who underwent coronary angiography.
METHODS

2.1 Study population

This study is based on data obtained from the Finnish KARDIO registry of cardiac patients undergoing invasive diagnostic and interventional procedures. The purpose of the registry is to provide data on evidence‐based cardiac care, thereby supporting the improvement of therapies for cardiac diseases, by combining data on demographic characteristics, chronic diseases, cardiovascular risk factors, coronary angiographies, and interventions (PCIs and CABGs). The KARDIO registry is updated prospectively by treating physicians, and it provides users with online interactive reports monitoring the processes of care and outcomes and allowing direct comparisons over time and with other hospitals. The performing cardiologist reports patient data from each procedure online via a web‐based form directly from the catheterization laboratory, using hospital documents, laboratory measurements, prevalent conditions, interviews, and all details of the invasive procedure. The data are collected from seven Finnish cardiology centres in Western, Central, and Northern Finland. Together, these seven centres provide specialized health care for a catchment area of approximately two million inhabitants. Between January 1, 2012 and December 30, 2018, a total of 82,911 patients (aged over 17 years) underwent cardiac catheterization, and the original KARDIO database comprised 149,028 procedures among these patients. The registry includes patients who underwent coronary angiography for diagnostic purposes or to establish disease severity in known CAD. A considerable proportion of the patients had ACS (ST‐segment elevation myocardial infarction [STEMI], non‐STE‐ACS [NSTEMI], or unstable angina pectoris [UAP]). Those who underwent revascularization (catheter‐based or surgical) and those treated conservatively were included. Patients referred to cardiac catheterization primarily for valvular heart disease were excluded, leaving 79,738 subjects for the analyses (Supporting Information: Figure ). Missing data for one or more main variables occurred in 48,727 patients included in the registry. A high workload of the performing cardiologist is recognized as a potential reason for incomplete data entry; data are therefore assumed to be missing at random (MAR). According to the Finnish national and ethical regulations on the use of hospital quality registry data for research and development purposes, written informed consent from patients is not mandatory for registration of data. Standards of care for coronary intervention procedures and related management were adopted at the discretion of the treating physicians. The National Board of Health and Welfare of Finland approved the registry and the linkage of data with the national death registry. Linkage was performed using the personal identification code (PIN), which is possessed by all Finnish citizens and permanent residents.

2.2 Clinical data collection

The registry comprises data on baseline characteristics, ECG changes, biochemical markers, coronary angiography findings, and medical and invasive therapy. Standards of care for interventional coronary procedures and related management were adopted at the discretion of the treating physicians. The accurate collection of data was the responsibility of the treating physicians and participating investigators.
Data collected before coronary angiography include age, sex, smoking status, hypertension, diabetes, dyslipidaemia, New York Heart Association (NYHA) functional classification, angina pectoris symptoms, kidney function, medication, symptoms and electrocardiogram changes at entry and at specified time points (for STEMI patients), previous MI, coronary revascularization, heart failure, and stroke. BMI was computed as weight in kilograms divided by the square of height in meters. Hypertension at rest was defined as hypertension confirmed by the current use of antihypertensive medication and/or systolic blood pressure (SBP) ≥ 140 mm Hg and/or diastolic blood pressure (DBP) ≥ 90 mm Hg. Diabetes was defined as a clinical diagnosis of diabetes with either dietary, oral, or insulin treatment. Dyslipidaemia was defined as the current use of lipid‐lowering medication (or a plasma low‐density lipoprotein cholesterol level of over 3.0 mmol/L). Smoking was classified as nonsmoker or current smoker. A patient was described as a current smoker if he or she had ever smoked regularly and had smoked cigarettes, cigars, or pipes within 1 month before the hospital admission. A family history of CAD was defined as positive when at least one first‐degree relative had been diagnosed with MI or CAD requiring revascularization before the age of 65 years for women and 55 years for men. Hospitalization‐related variables, including final diagnosis, therapy‐related complications, and other intervention‐related outcomes, were recorded.

2.3 All‐cause mortality events

In addition to the collected phenotypic data, the KARDIO registry is directly linked to the National Death Registry, providing continuously updated information on the overall mortality of all treated patients. The primary endpoint in this study was all‐cause mortality. The study design ensured that a first clinical evaluation was made at hospital discharge (a baseline visit), and follow‐up was carried out by linkage to the National Death Registry using the PIN. Follow‐up data were obtained by merging data from the mandatory Finnish Cause of Death Register with the KARDIO register data; merging was performed at the National Board of Health and Welfare in Finland based on the PIN. There was no loss to follow‐up.

2.4 Statistical analysis

Continuous variables are expressed as mean (standard deviation, SD) or median (interquartile range, IQR), and categorical data are presented as frequencies (percentages). Descriptive statistics were used to summarize the baseline characteristics overall and by BMI categories. We used the BMI categories established by the World Health Organization (WHO): underweight (15 to <18.5 kg/m²), normal weight (18.5 to <25 kg/m²), overweight (25 to <30 kg/m²; pre‐obesity), moderately obese (30 to <35 kg/m²; obesity class I), severely obese (35 to <40 kg/m²; obesity class II) and very severely obese (40 to <60 kg/m²; obesity class III). To handle the missing data properly under the MAR assumption, we used the aregImpute function from the Hmisc R package for multiple imputation ( m = 20 rounds). , This uses predictive mean matching (PMM) based on canonical‐correlation analysis (CCA). The imputation model may include nonlinear associations (restricted cubic splines). Model uncertainty is handled by taking a bootstrap sample from the original data at every imputation round. In addition to the variables in the actual analyses, some other variables derived from visits and follow‐up time were used to improve the imputations (Supporting Information: Spreadsheet ).
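As an illustration of the two preprocessing steps described above (WHO BMI categorization and multiple imputation), a minimal R sketch is given below; the data frame and column names are hypothetical placeholders rather than the actual KARDIO field names:

library(Hmisc)

# BMI from weight (kg) and height (m), then the WHO categories used in the text
dat$bmi <- dat$weight_kg / dat$height_m^2
dat$bmi_cat <- cut(dat$bmi,
                   breaks = c(15, 18.5, 25, 30, 35, 40, 60), right = FALSE,
                   labels = c("underweight", "normal weight", "overweight",
                              "obesity I", "obesity II", "obesity III"))

# Multiple imputation under MAR with predictive mean matching (20 rounds),
# taking a bootstrap sample of the data at each round, as described in the text
imp <- aregImpute(~ age + sex + bmi + smoking + diabetes + hypertension +
                    family_history, data = dat, n.impute = 20)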
Cox proportional hazards modelling was used to explore the relationship between categorical BMI (normal weight as the reference level) and the risk of all‐cause mortality with three different adjustment models: adjusted for age (Model 1); Model 1 plus smoking status, diabetes, hypertension, family history of CAD, sex and an age‐sex interaction (Model 2); and Model 2 plus angiographic findings (Model 3). To explore the shape of the relationship between BMI and all‐cause mortality, we performed a spline transformation for BMI, adjusting for covariates as in Model 3; a BMI of 23 kg/m² was set as the normal‐weight reference level because it is approximately the mean (23.0 kg/m²) and the median (23.4 kg/m²) of BMI. The complexity of the P‐spline curve was controlled visually, and the degrees of freedom were set to 3. Scaled Schoenfeld residuals were used to investigate the proportional hazards assumption in the complete‐case analysis and to decide whether adjusting (categorical) variables should be treated as covariates or stratifying variables. All models were stratified by the hospital the patient visited. In addition, the residuals showed that diabetes, dyslipidaemia, and the angiographic finding possibly violated the proportionality assumption, so they were used as stratifying variables. Rubin's rules were then applied to obtain the pooled hazard ratios (HRs) and corresponding 95% confidence intervals (CIs). Subgroup analyses were performed using the following characteristics: sex, operation urgency, family history of CAD, kidney failure, and follow‐up time (truncating to 1 year and focusing on first‐year survivors). The following sensitivity analyses were applied: imputing with only one interaction term (age‐sex), using BMI calculated from PMM‐imputed weight and height instead of PMM‐imputed BMI, turning all stratifying variables into covariates, and removing patients from the two smallest hospitals. We also conducted the analyses using complete cases only, that is, without any missing values in the analysis variables. The Kaplan–Meier method was used to show survival curves for the BMI categories. All analyses and graphics were carried out using R software and the following R packages: Hmisc (imputation), mice (pooling), survival (Cox models), ggplot2 (graphics), and survminer (graphics).
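A minimal sketch of the modelling steps described above, using the survival package named in the text, is given below; the variable names are illustrative, and the first model corresponds roughly to Model 3 fitted on a single imputed data set (in the actual analysis the fits over the 20 imputations were pooled with Rubin's rules):

library(survival)

# Model 3 analogue: BMI category vs. all-cause death, stratified by hospital
# and by the variables flagged by the Schoenfeld residuals
fit <- coxph(Surv(fu_years, death) ~ relevel(bmi_cat, ref = "normal weight") +
               age + sex + age:sex + smoking + hypertension + family_history +
               strata(hospital, diabetes, dyslipidaemia, angio_finding),
             data = imputed_1)
summary(fit)$conf.int   # hazard ratios with 95% CIs per BMI category

# Shape of the BMI-mortality relationship with a penalized spline (df = 3)
fit_spline <- coxph(Surv(fu_years, death) ~ pspline(bmi, df = 3) + age + sex +
                      strata(hospital), data = imputed_1)

# Proportional hazards check via scaled Schoenfeld residuals
cox.zph(fit)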
RESULTS

3.1 Patient characteristics

Patient characteristics overall and according to the different BMI categories are shown in Table . The majority of patients were male (60.4%), and the overall mean (standard deviation, SD) age was 65.3 (10.8) years. The overall median (interquartile range, IQR) BMI was 27.4 (24.8–30.8) kg/m². Patients with obesity were more likely to be younger and had a higher prevalence of common CVD risk factors, such as dyslipidaemia, hypertension, and diabetes, compared to underweight patients and those with normal BMI. Obese patients had more prevalent CAD at baseline than underweight patients. Underweight patients were more often female and smokers. Left ventricular ejection fraction (EF) was lowest among underweight and very severely obese patients (Table ). Patients with very severe obesity had less 1‐ to 3‐vessel CAD compared to normal‐weight and overweight patients. Left main CAD was most common among lean patients. Invasive interventions such as PCI and CABG were performed more commonly in normal‐weight than in obese patients (Table ). Supporting Information: Table presents the clinical characteristics according to whether BMI was available or missing.

3.2 Follow‐up

During a median (IQR) follow‐up of 5.5 (2.5–8.6) years (445,641 person‐years at risk), a total of 11,896 all‐cause deaths were recorded. The P‐spline curve showed a nonlinear U‐shaped relationship between BMI and all‐cause mortality risk (Figure ); the curve based on the complete‐case analysis was steeper. Patients in the underweight category were at a substantially increased risk of death compared to the other BMI categories. Compared to the normal‐weight category, the age‐adjusted HRs (95% CIs) for all‐cause mortality were 1.90 (1.49, 2.43), 0.96 (0.92, 1.01), 1.04 (0.99, 1.09), 1.08 (0.96, 1.20) and 1.45 (1.22, 1.72) for underweight, pre‐obesity, obesity class I, obesity class II and obesity class III, respectively. The HRs (95% CIs) were only slightly amplified to 2.00 (1.55, 2.58), 0.92 (0.88, 0.97), 1.01 (0.95, 1.06), 1.10 (0.98, 1.23), and 1.49 (1.26, 1.78) upon further adjustment for smoking status, diabetes, hypertension, family history of CAD and angiographic findings.

3.3 Subgroup and sensitivity analyses

In subgroup analyses, the associations did not vary importantly by sex, family history of CAD, or follow‐up time. On the other hand, when subjects with elective and urgent procedures were analysed separately, the HR (95% CI) for mortality in underweight patients was markedly elevated when the procedure was elective (3.09 [2.13, 4.48]) and lower when the procedure was urgent (1.50 [1.06, 2.14]). Kidney failure did not seem to modify the association between BMI and all‐cause mortality risk (Supporting Information: Figure ). Changing the stratifying variables into covariates produced almost identical results, suggesting no major violation of the proportional hazards assumptions. Using BMI calculated from PMM‐imputed weight and height instead of PMM‐imputed BMI gave a slightly higher HR for underweight (2.11 [1.66, 2.69]), but the rest of the estimates remained nearly unchanged. Limiting the interactions to the age‐sex interaction in the imputation phase diminished the HRs only marginally (1.86 [1.43, 2.42] for underweight), whereas focusing on the biggest hospitals amplified the effects to some extent (HR 2.07 [1.59, 2.70] for underweight). In general, the results appear robust to changes in the analysis settings.
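The category-wise survival curves mentioned in the statistical methods section can be produced, for example, as in the following sketch (illustrative variable names; the text states that survminer was used for the published figures):

library(survival)
library(survminer)

# Kaplan-Meier curves of all-cause mortality by WHO BMI category
km_fit <- survfit(Surv(fu_years, death) ~ bmi_cat, data = dat)
ggsurvplot(km_fit, risk.table = TRUE,
           xlab = "Follow-up (years)", ylab = "Survival probability")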
DISCUSSION 4.1 Main findings This study showed that very severely obese and underweight patients who underwent invasive coronary angiography had an increased risk of all-cause death compared to normal-weight patients. Our contemporary register data also showed that normal-weight and overweight patients had the lowest risk of overall death, which suggests that the obesity paradox also exists in patients undergoing an invasive coronary procedure. The results show a bimodal mortality pattern across the whole spectrum of BMI categories. The associations remained robust in subgroup and sensitivity analyses. High BMI levels were associated with common cardiac comorbidities such as diabetes mellitus, hypertension, and dyslipidaemia. 4.2 Previous studies Previous studies have suggested that overweight or obese patients with CAD may have lower morbidity and mortality than their leaner counterparts. , , After coronary revascularization procedures, such as PCI or CABG, the risk of total and CVD mortality and the rate of MI were highest among underweight patients, as defined by low BMI. Indeed, the overall mortality rate was lowest among slightly overweight patients. Several explanations have been proposed for the observed “obesity paradox” in cardiac patients. For example, younger cardiac patients may have a less extensive, non-diffuse form of CAD, which is easier to treat invasively than more advanced disease; this could be one of the main factors contributing to the phenomenon. It is likely that the exposure time to common atherosclerotic risk factors driving the development of CVDs is shorter in younger patients. Second, younger patients with CVDs may have a stronger physiological reserve to correct abnormal conditions, and younger patients who present earlier tend to receive effective pharmacological treatment from an early age. Our current study also confirmed that patients with a high BMI were slightly younger than those with a low BMI, while obese patients had a higher prevalence of cardiovascular risk factors. Central obesity is associated with insulin resistance and an atherogenic lipoprotein profile, and is independently related to CVD mortality in patients with CAD. , 4.3 Mechanisms and explanations Major bleeding complications are somewhat less frequent in overweight and moderately obese patients. Excess dosing of anticoagulant and antiplatelet drugs may cause more harm in very lean, aged patients, whereas bleeding is less likely to occur in overweight and obese patients ; bleeding is associated with higher short- and long-term mortality rates, which may explain our results to some extent. A low BMI reflects low lean body mass, which is associated with poorer cardiorespiratory and muscular fitness, both of which are related to adverse clinical outcomes. , Very low body mass may be a marker of other underlying diseases, explaining the higher mortality risk in these patients. However, the associations were consistent in the subgroup analysis by follow-up time (≤1 vs. >1 year), which suggests that underlying diseases, such as cancer among very lean patients, do not totally explain the observed associations. The obesity paradox, or the “BMI paradox,” has also been observed in patients with other chronic disease conditions such as congestive heart failure, chronic kidney disease on haemodialysis, malignancies, and peripheral artery disease, respiratory conditions, infections, as well as osteoarthritis.
, , Normal-weight and mildly obese patients may be an optimal group for all kinds of treatments, including antihypertensive and lipid-lowering therapies. Among patients with suspected CAD referred for coronary computed tomographic angiography, patients with higher BMI had a greater prevalence, extent, and severity of CAD that was not totally explained by the presence of traditional risk factors. The increased mortality risk was observed in patients with very severe obesity. Mild-to-moderately overweight patients may be less likely to present with serious acute coronary events leading to fatal outcomes, including cardiac arrest. There is also some evidence that a small amount of adipose tissue might provide some cardioprotective effects by producing hormones such as leptin and adiponectin, a molecule that protects cardiac muscle from ischemia/reperfusion injury by inhibiting iNOS and nicotinamide adenine dinucleotide phosphate-oxidase protein expression. , The weight-loss-associated decrease in plasma levels of the endocannabinoid (EC) anandamide (AEA) and increase in plasma adiponectin levels were associated with the normalization of coronary circulatory function after weight loss, signifying that the imbalance between ECs and adipocytokines may be an important determinant of coronary circulatory function in obesity. Increased AEA and 2-arachidonoylglycerol, which are predominantly produced and released from the adipose tissue in obese individuals, are associated with coronary circulatory dysfunction. Overweight may also be protective against malnutrition following a major cardiac event or invasive procedure in advanced CAD and heart failure. These factors may at least partly explain the protective effects of overweight among cardiac patients. , However, we have no data on factors such as the details of body composition, including the decrease in muscle mass that occurs with aging, or on underlying chronic diseases that may have led to involuntary weight loss. Previous studies have found that both unfit and inactive patients have a significantly higher risk of death compared to fit and active subjects, regardless of BMI levels. , , Habitual physical inactivity is a significant contributor to the increased mortality risk in obese individuals, since a sedentary lifestyle is more prevalent in obese than in leaner people. Higher cardiorespiratory fitness is associated with lower mortality across all BMI categories, and the prognostic benefits of overweight/obesity disappear among the most fit patients but persist in those with low fitness. A complex interplay between fitness and fatness contributes to an individual's CVD and mortality risk profile. Regular physical activity can substantially influence body fat and its distribution. Physical activity markedly reduces the volume of visceral adipose tissue to varying degrees, depending on the amount and intensity of exercise training. High levels of fitness largely offset the adverse effects of excess adiposity, which is also referred to as the “fat and fit” phenomenon. , Exercise training and increased physical activity, with the goal of maintaining or improving cardiorespiratory fitness, are efficient strategies for primary and secondary prevention of CVDs across BMI levels. , 4.4 Strengths and limitations A major strength of the KARDIO register study is its ongoing, prospective observational nature, which provides real-world contemporary data on invasive cardiology interventions and outcomes in Finland.
All included hospitals contributed data to the register; these hospitals are the sole providers of invasive cardiology treatment in a centralized public health care system. Other strengths include the large sample, with adequate numbers of normal-weight, overweight, and obese patients across the whole BMI spectrum; its representativeness of invasive cardiology patients; and the comprehensive panel of clinical characteristics, comorbidities, and lifestyle characteristics, which enabled adequate adjustment for potential confounders. Adequate handling of missing data via multiple imputation increases the accuracy and reliability of the conclusions; a more representative set of patients could be retained in the analyses and, in particular, the missing data pattern can be taken into account when data are MAR. The representativeness of the registry-based cohort was also strengthened by the inclusion of consecutive patients with varying indications for coronary angiography, derived from the general population at centres with different levels of care and representing nearly half of the hospitals that provide invasive coronary angiography in Finland. Several limitations have to be taken into consideration. First, this is a nonrandomized observational study that provides evidence on the association between BMI and mortality; thus, causality cannot be claimed and remains to be proven. Second, we evaluated all-cause mortality rather than cardiovascular mortality because cause-specific death data were not available. Third, we cannot rule out the possibility of residual confounding due to possible unmeasured confounding factors. Fourth, we did not capture measurements of body composition or body fat distribution, such as waist circumference (central obesity) and fat percentage, which have been suggested to be more closely related to adiposity-related outcomes. Although BMI is the most commonly used measure of obesity, it cannot distinguish between adipose tissue and lean body mass or between central and peripheral adiposity. Fifth, we were unable to control for the role of unintentional weight loss and medication use during the follow-up. Previous studies suggest that fat-free mass could serve as a better physiological scaling factor than BMI, which cannot separate the components of body composition, that is, fat and fat-free mass. Indeed, scales commonly used in clinical practice, such as the ratio of body mass and height for BMI, may underestimate the physiological rationale for using BMI as a scaling factor and a marker of CVD risk. On the other hand, the use of BMI is still endorsed by the WHO to classify obesity worldwide, given its simple and easily quantifiable nature. Other limitations include the use of single baseline measurements of BMI and of other time-dependent cofactors, such as medication changes. We did not have data on the use of guideline-recommended longer-term secondary prevention therapy, which might also have explained some of the differences in mortality among BMI groups, as data on secondary prevention during the follow-up were not collected. However, the internal consistency of the results and the overall consistency of our observations with earlier studies suggest that our findings reflect the current clinical scenario. We did not have data on other characteristics, such as physical activity, socioeconomic status, or cardiorespiratory fitness, and thus we cannot exclude the possibility that residual confounding from unmeasured causal factors, unevenly distributed between BMI groups, may have influenced our results.
However, our main analysis included age and smoking status, which are important factors that could lead to involuntary weight loss. There is documented evidence of an interplay between fitness, obesity, and mortality, but this could not be investigated because of the lack of fitness data. We had no detailed data on contrast agent use or laboratory values, including kidney function markers, after angiography and/or PCI. Patients with valvular heart diseases were excluded from the BMI and mortality analyses, as the etiology of these conditions is likely other than the metabolic abnormalities in CAD due to obesity and excessive body fatness. Also, we did not have data on recent weight loss before inclusion; very lean patients may have underlying chronic conditions such as cancer and pulmonary disease. In subsidiary analyses of our large study, the extent of significant CAD did not significantly alter our results, but left ventricular EF could not be included in the multivariable models due to the high amount of missing data in the registry.
CONCLUSIONS According to data from an ongoing Finnish multicentre cardiology registry study comprising patients undergoing coronary angiography, underweight and obesity class III are related to an increased mortality risk, whereas preobesity and obesity class I are associated with a decreased mortality risk. Our results support the concept of the obesity paradox among patients undergoing invasive coronary angiography.
Jari A. Laukkanen : Conceptualization, Methodology, Writing–original draft, Writing–review and editing, Visualization. Setor K. Kunutsor : Conceptualization, Methodology, Writing–original draft, Formal Writing–review and editing, Visualization, Formal analysis. Jussi Hernesniemi : Conceptualization, Methodology, Formal Writing–review and editing. Jaakko Immonen : Methodology, Formal Writing–review and editing, Visualization, Formal analysis. Markku Eskola : Conceptualization, Methodology, Formal Writing–review and editing. Francesco Zaccardi : Methodology, Formal Writing–review and editing, Formal analysis. Matti Niemelä : Conceptualization, Methodology, Formal Writing–review and editing. Timo Mäkikallio : Conceptualization, Methodology, Formal Writing–review and editing. Magnus Hagnäs : Formal Writing–review and editing. Jarkko Piuhola : Formal Writing–review and editing. Jukka Juvonen : Formal Writing–review and editing. Jussi Sia : Formal Writing–review and editing. Juha Rummukainen : Formal Writing–review and editing. Kari Kervinen : Conceptualization, Formal Writing–review and editing. Juha Karvanen : Methodology, Formal Writing–review and editing. Kjell Nikus : Conceptualization, Methodology, Formal Writing–review and editing.
The authors declare no conflict of interest.
Supplementary Figure 1. Study flow chart. Supplementary Figure 2. Association between body mass index and mortality in patients with and without kidney failure. Supplementary Table 1. Characteristics among patients with and without body mass index data. Supplementary information.
|
Driving quality improvement with nudges: True interventions in cardiology
|
75624173-3790-44d8-aa1a-9363e20c7dd1
|
10098501
|
Internal Medicine[mh]
|
Dr. A. H. Seto has received research grants from Philips, Acist, and Pfizer, and is a speaker for Terumo, General Electric, and Janssen, and is a consultant for Frond Medical.
|
Do ectomycorrhizal exploration types reflect mycelial foraging strategies?
|
4da29c6d-7f66-43cf-8990-3a7d0dfce22d
|
10098516
|
Microbiology[mh]
|
Ectomycorrhizal fungi link tree roots to the soil environment by forming extraradical mycelium that extends into the soil from the mycorrhizal root tip (Smith & Read, ). The extent and morphology of the extraradical mycelium are important traits that may be linked to functional variation among species and genera (Agerer, ), which may relate further to ecosystem processes. For example, mycelium is an important precursor of soil organic matter (Clemmensen et al ., ; Adamczyk et al ., ), and morphological differences have been linked to decomposer capacity (Clemmensen et al ., ; Argiroff et al ., ). Ectomycorrhizal fungi can be classified into different ‘soil exploration types’ based on general morphological traits of the colonised root tips and emanating mycelium. These exploration types have been hypothesised to reflect the extent and manner of extraradical mycelial proliferation in the soil. The ‘contact’ type is described as having dense, smooth, hydrophilic mantles and only few emanating hyphae, while the ‘short‐distance’ type produces abundant short, nonaggregated hyphae in the near vicinity of the root tip. By contrast, the ‘medium‐distance smooth’ and ‘long‐distance’ types produce little extraradical mycelium close to the root but form cords, which vary in length and hydrophobicity. The ‘medium‐distance fringe’ and ‘mat’ types form extensive mycelia with many aggregated, hydrophobic cords (Agerer, ). The medium‐distance fringe, mat and long‐distance types have been associated with high mycelial biomass production (Hobbie & Agerer, ). Exploration types have, to some extent, been found to reflect niche differentiation of ectomycorrhizal fungi. Those with none or few cords (contact, short‐distance and medium‐distance smooth) have been proposed to maximise the area of hydrophilic hyphae that extend into the soil and thereby promote rapid uptake of mobile N and decrease leaching (Hobbie & Agerer, ; Bahr et al ., ). Types with hydrophobic mantles and cords (medium‐distance fringe and long‐distance) may instead display more directed growth towards discrete patches of immobile, organic resources (Finlay & Read, ; Cairney, ; Hobbie & Agerer, ). Furthermore, ectomycorrhizal fungi may respond to small‐scale variation in substrate quality by adapting local mycelial proliferation and spatial distribution (Rosling et al ., ; Kluting et al ., ). Exploration types have also been suggested to reflect patterns of community assembly via their different abilities to colonise new roots, with cord‐forming types being more successful in habitats with low root density (Peay et al ., ). It is important to point out that the exploration types originally were defined based on morphological investigations of root tips (Agerer, ) and hypotheses about the extent and foraging patterns of extraradical mycelia have largely been extrapolated from the amount and morphology of hyphae emanating from the mantle and often based on a few species (Weigt et al ., ). Although exploration types are used as equivalents to foraging strategies (Tedersoo & Smith, ), whether they correspond to systematic and consistent differences in soil foraging remains uncertain. In comparisons between fungal communities on roots and in soil, species forming contact‐type mycorrhizas were underrepresented in the soil (Genney et al ., ; Kjøller, ), implying a poor ability to forage for nutrients away from the roots. 
Kjøller ( ) found that species with exploration types assumed to form extraradical mycelia (short‐distance, medium‐distance and long‐distance) occurred abundantly in root‐free ingrowth mesh bags relative to roots. Similarly, Parrent & Vilgalys ( ) found that Tylospora (short‐distance) and Amanita (medium‐distance smooth) were prolific in ingrowth mesh bags and in bulk soil, although rarely observed as ectomycorrhizas, suggesting a high capacity to forage for nutrients away from the roots. Despite similarities in the results of these studies, divergent patterns have been observed for the long‐distance genus Suillus . Parrent & Vilgalys ( ) found Suillus to be an extensive soil coloniser despite being rare on roots, while Genney et al . ( ) observed the opposite relationship. Thus, more empirical evidence is needed to underpin a better understanding of how exploration types relate to proliferation of extraradical mycelium and selective colonisation of soil niches. In this study, we conducted a ‘cafeteria experiment’ (Krebs, ), in which we incubated root‐excluding mycelial ingrowth mesh bags filled with different soil and sand substrates in mature, boreal Picea abies forests during one growing season. We investigated whether ectomycorrhizal fungi assigned to the same exploration type share common foraging patterns by comparing ectomycorrhizal communities in the ingrowth bags with those on adjacent fine roots, using DNA‐based metabarcoding. DNA has many drawbacks as a marker of mycelial biomass (Baldrian et al ., ). For instance, copy numbers per hyphal material vary between taxa and most likely also between different tissue types within taxa. Nevertheless, DNA metabarcoding enables semiquantitative assessment (Castaño et al ., ) of colonisation patterns of different fungal taxa and is particularly useful to evaluate relative differences among communities. Furthermore, we can provide information for taxa that are difficult to isolate, for which information from laboratory microcosms is scarce. The cafeteria experiment setup has been used commonly in animal ecology, where inference of ecology can be made based on the choices of animals when offered different food sources. Here, we used a similar methodology to investigate the extent and selectivity of extraradical mycelial foraging by different ectomycorrhizal fungi. The cafeterias offered a set of six different mesh bags, filled either with soil with varying pH and organic matter and nutrient contents, or with inert sand with or without apatite as a phosphorus (P) source. To capture a larger diversity of fungal species and foraging traits across different environments, cafeterias were incubated in 10 forests that varied in soil pH and inorganic N availability. Based on the general morphological traits of their exploration types, we hypothesised that ectomycorrhizal fungi forage for soil resources in different ways. Contact type mycorrhizal fungi were expected to have low abundance in all bags due to their limited extension into the soil. The short‐distance and medium‐distance smooth exploration types, which form hydrophilic mycelia that absorb soluble, low‐molecular‐size nutrients from the surrounding soil, were expected to colonise all substrates (including inert sand) without preference, in a space‐filling manner. 
The hydrophobic medium‐distance fringe and long‐distance exploration types are thought to forage for immobile organic resources by forming cords and are expected to explore the ingrowth bags, particularly those with soil, with extensively proliferating extraradical mycelium (Finlay & Read, ; Leake et al ., ) in a manner analogous to saprotrophic cord‐forming fungi (Boddy, ). Furthermore, if fungal micro‐niches are determined by direct and local environmental filtering of mycelial colonisation (Rosling et al ., ; Kluting et al ., ), we expected fungal taxa to diverge in their colonisation of different soil substrates. Specifically, we expected species of short‐distance and medium‐distance smooth types, which are relatively more abundant in nutrient‐rich environments (Moeller et al ., ; Sterkenburg et al ., ; Defrenne et al ., ; Pellitier & Zak, ), to predominantly colonise more nutrient‐rich soil substrates, also on the local ‘cafeteria scale’. By contrast, we expected cord‐forming species (mainly of medium‐fringe type), which are often abundant in nutrient‐poor environments, to be relatively more abundant in nutrient‐poor substrates due to lower competition from other ectomycorrhizal fungi. Finally, we amended sand bags with apatite to investigate preferential colonisation by specific genera potentially involved in P mining by mineral weathering. Such a trait may be a competitive advantage in N‐rich but P‐limited forests, and increased ectomycorrhizal biomass production has been observed in apatite‐amended ingrowth bags in nemo‐boreal P. abies forests subjected to high rates of N deposition (14.5 kg ha −1 yr −1 ) (Wallander & Thelin, ; Almeida et al ., ).
Ingrowth mesh bags Aiming for a high variation in soil properties, organic topsoil or mull‐rich mineral soil (top 10 cm) was collected in autumn 2017 from four Swedish Picea abies (L.) H. Karst forests. Two soils were collected in central Sweden and two in southern Sweden. One of the southern soils had been subjected to P fertilisation. Some chemical characteristics of the soils are described in more detail in Table . Green parts of mosses and roots coarser than 2 mm in diameter were removed, and the soils were stored in bags at room temperature and in darkness for 17 months to reduce background levels of ectomycorrhizal DNA (Bååth et al ., ). The soils remained moist throughout the storage period. After 17 months, the soils were frozen, ground in a custom‐built freeze‐mill, sieved through 2 mm mesh, soaked in deionised water to wash out soluble nutrients and drained. The soils were dried at 40°C and filled into cylindrical mesh bags (8 cm long, 2.5 cm diameter; 50 μm mesh size; Sintab Product AB, Malmö, Sweden), which allowed ingrowth of fungal mycelium but excluded tree roots (Wallander et al ., ); c . 7 g of organic topsoil and c . 21 g of mull soil were used to fill the bags. Additional bags were filled with either sand ( c . 40 g; 0.36–2.0 mm; 99.6% SiO 2 ; Silversand 90; Sibelco Nordic AB, Västerås, Sweden) or sand mixed with 1% apatite (Krantz, Bonn, Germany; sourced from Madagascar) with a grain size of 0.65–2.00 mm. pH of the soil substrates was measured in a 1 : 5 v/v ratio of soil and deionised H 2 O with an 855 Robotic Titrosampler and an Aquatrode Plus combined pH electrode (Metrohm, Herisau, Switzerland) at collection and after 17 months of preincubation in room temperature. After sieving, washing and drying the preincubated soils, inorganic N concentrations (NH 4 + and NO 3 − ) were measured by extraction in a 1 : 2.5 w/w ratio of soil and 2 M KCl and analysed on an autoanalyser (Bran + Luebbe XY‐2 Sampler; Seal Analytical Inc., Emu Plains, NSW, Australia). Organic matter concentration was determined by loss on ignition at 550°C for 5 h, and C/N was measured in a combustion elemental analyser (TruMac CN; Leco, St Joseph, MI, USA). Site selection and field incubation Ten mature (> 70 yr) P. abies‐ dominated forests in central Sweden (latitude: 59.2–60.5°N) were selected for field incubation (Table ). Forests with contrasting soil fertility were selected based on visual assessments (e.g. composition of understorey vegetation, contribution of deciduous trees and soil type). More productive sites had moder/mineral soils, some contribution of deciduous trees ( Betula , Populus , Corylus ) and an understorey consisting of mosses ( Rhytidiadelphus sp., Hylocomium splendens , Ptilium crista‐castrensis and Pleurozium schreberi ), grasses and ferns. Less productive sites had podzolised soil with a distinct organic layer (organic topsoil) overlying mineral soil, some contribution of Pinus sylvestris and understorey vegetation consisting of mosses ( Hylocomium splendens and Pleurozium schreberi ) and dwarf shrubs ( Vaccinium myrtillus and Vaccinium vitis ‐ idaea ). Bags were soaked in deionised H 2 O for a couple of minutes and placed in holes made by removing a soil core (2.5 cm in diameter) with a metal soil corer. At sites without a distinct organic layer, bags were placed vertically down to 8 cm depth from the soil surface and at sites with podzol soils, they were placed with the bottom of the bag at the organic–mineral soil interface. 
The six bags, containing different substrates (Table ), were grouped in five replicate ‘cafeterias’ per site, spaced at least 5 m apart, each containing one bag of each substrate in a circle with 1 m diameter and even spacing of bags (Supporting Information Fig. ). In total, 10 sites × 6 substrates × 5 replicates = 300 bags were incubated. The incubation period lasted 153–160 d (May–November), after which the bags were retrieved from the soil (277 bags were recovered), placed individually in 50 ml tubes and frozen at −20°C within 8 h. At the time of bag collection, two soil cores (3 cm in diameter) were sampled from the middle of each cafeteria, spanning the same depth as the bags. Sample preparation and soil chemical analysis One of the two soil cores per cafeteria was used to measure soil chemical characteristics. The core was gently homogenised before a subsample (5 ml) was used to analyse pH as described previously. Another subsample was used to extract ammonium and nitrate as described previously. Picea abies roots (< 2 mm in diameter) were retrieved from the second central soil core and rinsed carefully. Cleaned roots and substrates from ingrowth mesh bags were freeze‐dried and finely ground; soils and roots in a Precellys homogenizer (Bertin Instruments, Montigny la Bretonneux, France); and sand in a ball mill (LMLW‐320/2; Laarmann, Roermond, the Netherlands). Fungal community analysis DNA was extracted from 20 to 50 mg (roots), 75 mg (organic soils: central, southern, southern P fertilised), 250 mg (mull soil) or 500 mg (sand and sand amended with apatite) of material with the NucleoSpin Soil kit (Macherey‐Nagel, Duren, Germany), and extracts were diluted to a DNA concentration of 0.5 ng μl −1 . DNA was also extracted from samples of substrates that were not incubated in the field. Amplicons of the ITS2 region were produced by PCR using the forward primer fITS7 (Ihrmark et al ., ) and the reverse primer ITS4 (White et al ., ) with unique identification tags attached to both primers. Reactions (50 μl) were run with 12.5 ng of DNA template, 0.2 mM dNTP, 0.025 U μl −1 Dreamtaq polymerase (Thermo Fischer Scientific, Waltham, MA, USA) and 0.5 μM of each primer. PCR was performed with denaturation at 94°C, annealing at 56°C and extension at 72°C for 30 s each and cycle numbers (28–35) optimised to ensure that the reaction was in the exponential phase (Castaño et al ., ). Negative controls with deionised H 2 O instead of DNA template were included. A total of 267 ingrowth mesh bag samples, 46 root samples and samples of nonincubated substrates (to characterise background communities) were successfully amplified and cleaned with Sera‐Mag (Cytiva, Marlborough, MA, USA) according to the manufacturer's instructions. Amplicon concentrations were measured fluorometrically on a Qubit (Invitrogen), and the PCR products were merged in equal amounts into four pools and cleaned with the EZNA Cycle Pure Kit (Omega Bio‐Tek., Norcross, GA, USA). Library preparation and sequencing were conducted by SciLifeLab (NGI, Uppsala, Sweden) on the PacBio Sequel I platform (Pacific Biosciences, Menlo Park, CA, USA) in one SMRT cell per pool. The PacBio platform was chosen to minimise biases due to ITS2 length variation (Castaño et al ., ). 
Raw sequences were filtered and clustered in the bioinformatics pipeline Scata ( https://scata.mykopat.slu.se/ ; Ihrmark et al ., ), accepting sequences with length > 100 bp, mean quality > 20, single base quality > 3, primer sequence similarity > 90% and intact identification tags. Genotypes found only once in the whole data set were removed, and sequences were clustered into species hypotheses (Kõljalg et al ., ) using single-linkage clustering, with 98.5% similarity to the closest neighbour required for sequences to join clusters. Species were identified by comparing representative sequences to the UNITE database (Nilsson et al ., ), and only ectomycorrhizal fungal species hypotheses ( n = 342) were selected for further analyses. Relative abundances of any ectomycorrhizal species (in the total fungal community) found in soil substrates before field incubation (i.e. background levels, 0.6–5.6%; Table ) were subtracted from their relative abundances measured after incubation (31–43%). After this background correction, relative abundances of ectomycorrhizal genera were calculated as their share of the ectomycorrhizal community. We selected 12 genera that were present on roots in at least 10 cafeterias (out of a total of 50) to be included in the statistical analyses. Exploration types were assigned according to Tedersoo & Smith ( ) and the DEEMY database (Agerer & Rambold, ; http://www.deemy.de/ ), and in cases of known intrageneric variation, the exploration type of the dominant species was used. The tested genera and their assigned exploration types, hydrophobicity and expected foraging patterns are listed in Table . Statistical analysis For each ectomycorrhizal genus in each cafeteria, we calculated a log ratio of the relative abundance in ingrowth bags relative to roots (Eqn ) and in soil bags relative to sand bags (Eqn ). Only cafeterias where the genus was present on roots were included in these calculations, with 11–39 cafeterias assessed depending on the genus. (Eqn 1) log[(mean abundance in all bags + μ)/(abundance on roots)] (Eqn 2) log[(mean abundance in soil bags + μ)/(mean abundance in sand bags + μ)], where μ = 1/(mean sequencing depth × 6), that is, the lowest expected relative abundance, added to avoid zeros. To evaluate whether exploration type was a good predictor of mycelial growth patterns, we used mixed-effect linear models (the lmerTest and lme4 packages) (Bates et al ., ; Kuznetsova et al ., ) in R (v.4.0.3; R Core Team, ) with the log ratios as response variables, exploration type as explanatory variable, and genus and cafeteria nested within site as random factors. Next, the log ratios for each genus were tested to investigate whether individual genera were characterised by little or prolific extraradical mycelial growth (i.e. a low or high ratio of abundance in bags relative to roots) or by a preference for soil over inert sand. This was done by testing whether the intercept of mixed-effect linear models, with log ratio as response variable and cafeteria nested within site as random factor, was significantly different from zero. P -values from all genus-specific models were corrected for testing of multiple taxa by the Benjamini–Hochberg method (Benjamini & Hochberg, ). We also tested whether different types of substrates recruited different ectomycorrhizal communities, that is, whether the extraradical mycelia of different genera displayed a preference for specific substrates.
Substrate effects on community composition at the genus level among soil bags (humus and mull substrates) and among sand bags (sand and apatite amended sand) were evaluated by PERMANOVA (adonis2 function in the vegan package in R; Oksanen et al ., ) with 1000 permutations constrained to samples within each cafeteria. Individual mixed‐effect linear models were applied for specific genera with square root transformed relative abundance (Hellinger transformation) as the response variable, substrate as a fixed factor and cafeteria nested within site as a random factor. P ‐values were corrected for multiple testing as described previously, and genera with P ≤ 0.05 were subjected to Tukey's HSD post hoc tests with the emmeans package (Lenth et al ., ). Graphs were produced with ggplot2 (Wickham et al ., ).
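A minimal R sketch of the log-ratio calculations (Eqns 1–2) and the mixed-effect models described above is shown below. The object and column names (bag_abund, exploration_types, rel_abund_bag, rel_abund_roots, is_soil, mean_seq_depth) are assumed placeholders for the study's actual data structures, not names used in the original analysis, and the genus shown in the intercept test is only an example.

```r
library(dplyr)
library(lme4)
library(lmerTest)  # adds p-values to lmer fits

mu <- 1 / (mean_seq_depth * 6)  # lowest expected relative abundance (Eqns 1-2)

ratios <- bag_abund %>%               # one row per genus x cafeteria x bag
  group_by(site, cafeteria, genus) %>%
  summarise(bag_mean  = mean(rel_abund_bag),
            soil_mean = mean(rel_abund_bag[is_soil]),
            sand_mean = mean(rel_abund_bag[!is_soil]),
            root      = first(rel_abund_roots),
            .groups   = "drop") %>%
  filter(root > 0) %>%                # only cafeterias where the genus occurred on roots
  mutate(log_bag_root  = log((bag_mean  + mu) / root),              # Eqn 1
         log_soil_sand = log((soil_mean + mu) / (sand_mean + mu)))  # Eqn 2

# Is exploration type a predictor of bag vs root colonisation?
m_type <- lmer(log_bag_root ~ exploration_type +
                 (1 | genus) + (1 | site/cafeteria),
               data = left_join(ratios, exploration_types, by = "genus"))
anova(m_type)

# Genus-specific test: does the intercept differ from zero?
m_one <- lmer(log_bag_root ~ 1 + (1 | site/cafeteria),
              data = filter(ratios, genus == "Amphinema"))
summary(m_one)$coefficients
# p-values from all genus-specific models are then adjusted with p.adjust(method = "BH")
```

The substrate-preference tests could be completed analogously, for example with vegan::adonis2 and permutations restricted to within-cafeteria blocks via a permute::how(blocks = ...) design, as described above.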
Aiming for a high variation in soil properties, organic topsoil or mull‐rich mineral soil (top 10 cm) was collected in autumn 2017 from four Swedish Picea abies (L.) H. Karst forests. Two soils were collected in central Sweden and two in southern Sweden. One of the southern soils had been subjected to P fertilisation. Some chemical characteristics of the soils are described in more detail in Table . Green parts of mosses and roots coarser than 2 mm in diameter were removed, and the soils were stored in bags at room temperature and in darkness for 17 months to reduce background levels of ectomycorrhizal DNA (Bååth et al ., ). The soils remained moist throughout the storage period. After 17 months, the soils were frozen, ground in a custom‐built freeze‐mill, sieved through 2 mm mesh, soaked in deionised water to wash out soluble nutrients and drained. The soils were dried at 40°C and filled into cylindrical mesh bags (8 cm long, 2.5 cm diameter; 50 μm mesh size; Sintab Product AB, Malmö, Sweden), which allowed ingrowth of fungal mycelium but excluded tree roots (Wallander et al ., ); c . 7 g of organic topsoil and c . 21 g of mull soil were used to fill the bags. Additional bags were filled with either sand ( c . 40 g; 0.36–2.0 mm; 99.6% SiO 2 ; Silversand 90; Sibelco Nordic AB, Västerås, Sweden) or sand mixed with 1% apatite (Krantz, Bonn, Germany; sourced from Madagascar) with a grain size of 0.65–2.00 mm. pH of the soil substrates was measured in a 1 : 5 v/v ratio of soil and deionised H 2 O with an 855 Robotic Titrosampler and an Aquatrode Plus combined pH electrode (Metrohm, Herisau, Switzerland) at collection and after 17 months of preincubation in room temperature. After sieving, washing and drying the preincubated soils, inorganic N concentrations (NH 4 + and NO 3 − ) were measured by extraction in a 1 : 2.5 w/w ratio of soil and 2 M KCl and analysed on an autoanalyser (Bran + Luebbe XY‐2 Sampler; Seal Analytical Inc., Emu Plains, NSW, Australia). Organic matter concentration was determined by loss on ignition at 550°C for 5 h, and C/N was measured in a combustion elemental analyser (TruMac CN; Leco, St Joseph, MI, USA).
Ten mature (> 70 yr) P. abies‐ dominated forests in central Sweden (latitude: 59.2–60.5°N) were selected for field incubation (Table ). Forests with contrasting soil fertility were selected based on visual assessments (e.g. composition of understorey vegetation, contribution of deciduous trees and soil type). More productive sites had moder/mineral soils, some contribution of deciduous trees ( Betula , Populus , Corylus ) and an understorey consisting of mosses ( Rhytidiadelphus sp., Hylocomium splendens , Ptilium crista‐castrensis and Pleurozium schreberi ), grasses and ferns. Less productive sites had podzolised soil with a distinct organic layer (organic topsoil) overlying mineral soil, some contribution of Pinus sylvestris and understorey vegetation consisting of mosses ( Hylocomium splendens and Pleurozium schreberi ) and dwarf shrubs ( Vaccinium myrtillus and Vaccinium vitis ‐ idaea ). Bags were soaked in deionised H 2 O for a couple of minutes and placed in holes made by removing a soil core (2.5 cm in diameter) with a metal soil corer. At sites without a distinct organic layer, bags were placed vertically down to 8 cm depth from the soil surface and at sites with podzol soils, they were placed with the bottom of the bag at the organic–mineral soil interface. The six bags, containing different substrates (Table ), were grouped in five replicate ‘cafeterias’ per site, spaced at least 5 m apart, each containing one bag of each substrate in a circle with 1 m diameter and even spacing of bags (Supporting Information Fig. ). In total, 10 sites × 6 substrates × 5 replicates = 300 bags were incubated. The incubation period lasted 153–160 d (May–November), after which the bags were retrieved from the soil (277 bags were recovered), placed individually in 50 ml tubes and frozen at −20°C within 8 h. At the time of bag collection, two soil cores (3 cm in diameter) were sampled from the middle of each cafeteria, spanning the same depth as the bags.
One of the two soil cores per cafeteria was used to measure soil chemical characteristics. The core was gently homogenised before a subsample (5 ml) was used to analyse pH as described previously. Another subsample was used to extract ammonium and nitrate as described previously. Picea abies roots (< 2 mm in diameter) were retrieved from the second central soil core and rinsed carefully. Cleaned roots and substrates from ingrowth mesh bags were freeze‐dried and finely ground; soils and roots in a Precellys homogenizer (Bertin Instruments, Montigny la Bretonneux, France); and sand in a ball mill (LMLW‐320/2; Laarmann, Roermond, the Netherlands).
DNA was extracted from 20 to 50 mg (roots), 75 mg (organic soils: central, southern, southern P fertilised), 250 mg (mull soil) or 500 mg (sand and sand amended with apatite) of material with the NucleoSpin Soil kit (Macherey‐Nagel, Duren, Germany), and extracts were diluted to a DNA concentration of 0.5 ng μl −1 . DNA was also extracted from samples of substrates that were not incubated in the field. Amplicons of the ITS2 region were produced by PCR using the forward primer fITS7 (Ihrmark et al ., ) and the reverse primer ITS4 (White et al ., ) with unique identification tags attached to both primers. Reactions (50 μl) were run with 12.5 ng of DNA template, 0.2 mM dNTP, 0.025 U μl −1 Dreamtaq polymerase (Thermo Fischer Scientific, Waltham, MA, USA) and 0.5 μM of each primer. PCR was performed with denaturation at 94°C, annealing at 56°C and extension at 72°C for 30 s each and cycle numbers (28–35) optimised to ensure that the reaction was in the exponential phase (Castaño et al ., ). Negative controls with deionised H 2 O instead of DNA template were included. A total of 267 ingrowth mesh bag samples, 46 root samples and samples of nonincubated substrates (to characterise background communities) were successfully amplified and cleaned with Sera‐Mag (Cytiva, Marlborough, MA, USA) according to the manufacturer's instructions. Amplicon concentrations were measured fluorometrically on a Qubit (Invitrogen), and the PCR products were merged in equal amounts into four pools and cleaned with the EZNA Cycle Pure Kit (Omega Bio‐Tek., Norcross, GA, USA). Library preparation and sequencing were conducted by SciLifeLab (NGI, Uppsala, Sweden) on the PacBio Sequel I platform (Pacific Biosciences, Menlo Park, CA, USA) in one SMRT cell per pool. The PacBio platform was chosen to minimise biases due to ITS2 length variation (Castaño et al ., ). Raw sequences were filtered and clustered in the bioinformatics pipeline S cata ( https://scata.mykopat.slu.se/ ; Ihrmark et al ., ), accepting sequences with length > 100 bp, mean quality > 20, single base quality > 3, primer sequence similarity > 90% and intact identification tags. Genotypes found only once in the whole data set were removed, and sequences were clustered into species hypotheses (Kõljalg et al ., ) using single‐linkage clustering with 98.5% similarity to the closest neighbour required for sequences to join clusters. Species were identified by comparing representative sequences to the UNITE database (Nilsson et al ., ), and only ectomycorrhizal fungal species hypotheses ( n = 342) were selected for further analyses. Relative abundances of any ectomycorrhizal species (in the total fungal community) found in soil substrates before field incubations (i.e. background levels, 0.6–5.6%; Table ) were subtracted from their relative abundances measured after incubation (31–43%). After this background correction, relative abundances of ectomycorrhizal genera were calculated as their share of the ectomycorrhizal community. We selected 12 genera that were present on roots in at least 10 cafeterias (out of a total of 50) to be included in the statistical analyses. Exploration types were assigned according to Tedersoo & Smith ( ) and the DEEMY database (Agerer & Rambold, ; http://www.deemy.de/ ), and in cases of known intrageneric variation, the exploration type of the dominant species was used. The tested genera and their assigned exploration types, hydrophobicity and expected foraging patterns are listed in Table .
For each ectomycorrhizal genus in each cafeteria, we calculated a log ratio of the relative abundance in ingrowth bags relative to roots (Eqn 1) and in soil bags relative to sand bags (Eqn 2). Only cafeterias where the genus was present on roots were included in these calculations, with 11–39 cafeterias assessed depending on the genus.

(Eqn 1) log[(mean abundance in all bags + μ) / abundance on roots]

(Eqn 2) log[(mean abundance in soil bags + μ) / (mean abundance in sand bags + μ)]

where μ = 1/(mean sequencing depth × 6), that is, the lowest expected relative abundance, was added to avoid zeros. To evaluate whether exploration type was a good predictor of mycelial growth patterns, we used mixed‐effect linear models (the lmerTest and lme4 packages) (Bates et al., ; Kuznetsova et al., ) in R (v.4.0.3; R Core Team, ) with the log ratios as response variables, exploration type as explanatory variable, and genus and cafeteria nested within site as random factors. Next, the log ratios for each genus were tested to investigate whether individual genera were characterised by little or prolific extraradical mycelial growth (i.e. low or high soil/root ratio) or a preference for soil over inert sand. This was done by testing whether the intercept of mixed‐effect linear models with log ratio as response variable and cafeteria nested within site as random factor was significantly different from zero. P‐values from all genus‐specific models were corrected for testing of multiple taxa by the Benjamini–Hochberg method (Benjamini & Hochberg, ). We also tested whether different types of substrates recruited different ectomycorrhizal communities, that is, whether the extraradical mycelia of different genera displayed preference for specific substrates. Substrate effects on community composition at the genus level among soil bags (humus and mull substrates) and among sand bags (sand and apatite‐amended sand) were evaluated by PERMANOVA (adonis2 function in the vegan package in R; Oksanen et al., ) with 1000 permutations constrained to samples within each cafeteria. Individual mixed‐effect linear models were applied for specific genera with square‐root‐transformed relative abundance (Hellinger transformation) as the response variable, substrate as a fixed factor and cafeteria nested within site as a random factor. P‐values were corrected for multiple testing as described previously, and genera with P ≤ 0.05 were subjected to Tukey's HSD post hoc tests with the emmeans package (Lenth et al., ). Graphs were produced with ggplot2 (Wickham et al., ).
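To make the log-ratio calculation (Eqns 1–2) and the mixed-model tests concrete, here is a minimal R sketch. The data frame, its column names and the mean sequencing depth are all hypothetical stand-ins for the study data, and the model calls only mirror the structure described above; this is not the authors' script.

```r
library(lme4)
library(lmerTest)

## Toy long-format data: one row per genus x site x cafeteria (hypothetical)
set.seed(1)
dat <- expand.grid(genus = c("Piloderma", "Amphinema", "Tylospora"),
                   site = paste0("site", 1:5),
                   cafeteria = paste0("caf", 1:2))
dat$exploration_type <- ifelse(dat$genus == "Tylospora",
                               "short-distance", "medium-distance fringe")
dat$abund_roots     <- runif(nrow(dat), 0.01, 0.30)
dat$abund_all_bags  <- runif(nrow(dat), 0.00, 0.30)
dat$abund_soil_bags <- runif(nrow(dat), 0.00, 0.30)
dat$abund_sand_bags <- runif(nrow(dat), 0.00, 0.30)

mean_depth <- 5000                 # placeholder for the observed mean sequencing depth
mu <- 1 / (mean_depth * 6)         # lowest expected relative abundance, added to avoid zeros

# Eqn 1: colonisation of ingrowth bags relative to roots
dat$log_bag_root  <- log((dat$abund_all_bags  + mu) / dat$abund_roots)
# Eqn 2: colonisation of soil bags relative to sand bags
dat$log_soil_sand <- log((dat$abund_soil_bags + mu) / (dat$abund_sand_bags + mu))

# Does exploration type predict colonisation of bags relative to roots?
m1 <- lmer(log_bag_root ~ exploration_type + (1 | genus) + (1 | site/cafeteria),
           data = dat)
anova(m1)

# Genus-specific test: is the mean log ratio different from zero?
m_genus <- lmer(log_bag_root ~ 1 + (1 | site/cafeteria),
                data = subset(dat, genus == "Piloderma"))
summary(m_genus)$coefficients      # intercept and P-value (then Benjamini-Hochberg across genera)
```

With the real data, a non-significant intercept in the genus-specific model would indicate similar abundance in bags and on roots, while a positive or negative intercept would indicate prolific or sparse extraradical growth, respectively.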
A total of 903 787 sequences passed the quality check, and after the removal of unique sequences, 466 080 sequences were clustered into 2668 species hypotheses, whereof 342 (272 499 sequences accounting for 0.6–95% of total reads from each sample; mean 46% and median 44%) were ectomycorrhizal and 235 (244 713 sequences accounting for 0.2–95% of total reads from each sample; mean 41% and median 38%) belonged to the 12 most frequently encountered genera (Table ). These genera together accounted for on average 87% of the ectomycorrhizal reads in individual samples (range: 9–100%; median: 97%). Exploration type was not a good predictor of relative colonisation of bags vs roots ( P = 0.7) or soil bags vs sand bags ( P = 0.3; Table ). Amphinema , Tomentella and Tylospora had a high ratio of extraradical mycelium to root‐associated mycelium and were, thus, prolific bag colonisers, despite being of different exploration types. By contrast, Hyaloscypha , Hygrophorus, Cortinarius and Piloderma were more abundant on roots than as extraradical mycelium, although the two latter genera are of medium‐distance fringe type and, thus, expected to proliferate far into the soil (Tables ). Lactarius , Russula, Amanita and Piloderma showed a preference for soil bags, whereas Amphinema was more prolific in sand bags. Furthermore, Lactarius and Russula occurred as abundantly in ingrowth bags as on roots, despite being of the contact type (Fig. ; Tables ). Community composition on the genus level differed between different types of soil bags ( P = 0.001; Table ), with three genera displaying a preference for some of the soil substrates; Amphinema and Tomentella were less abundant in the southern soils, where Cenococcum was more prolific (Fig. ; Tables ). Apatite‐amended bags did not diverge in community composition from nonamended sand ( P = 0.4; Tables ).
All in all, we found little support for the utility of exploration types to predict patterns of extraradical mycelial foraging. Contrary to our hypothesis, contact‐type genera did not generally colonise bags to a lesser extent than others (Fig. ); only the ascomycete genus Hyaloscypha behaved as expected for a contact type and colonised roots more extensively than ingrowth bags. The other two contact‐type genera, Russula and Lactarius , preferably colonised soil substrates over sand (Fig. ), which may explain why Kjøller ( ), who used sandbags, concluded that these genera have limited mycelial proliferation. The detection of abundant DNA of some contact‐type genera in soil substrates suggests that they (Russulaceae) may colonise the soil matrix with extraradical mycelium more extensively than proposed by Agerer ( ), in line with the observation that Lactarius rufus was present in bulk soil without being detected on ectomycorrhizal root tips (Genney et al ., ). Presumably, contact types may extend from the roots with fine emanating hyphae that are not readily visible and selectively target organic hotspots. Amanita (medium‐distance smooth) behaved similarly. We hypothesised that the extraradical mycelium of the hydrophilic short‐distance and medium‐distance smooth exploration types would expand throughout the soil without preference for any particular substrate in a space‐filling manner, and, thus, be prolific in all types of ingrowth bags. Tomentella (medium‐distance smooth) and Tylospora (short‐distance) were indeed more abundant in bags than on roots. However, the short‐distance type Hygrophorus was, by contrast, mainly found on roots. Contrary to our expectations, Piloderma and Cortinarius , which both mainly form medium‐distance fringe type mycorrhizas, did not proliferate extensively in the ingrowth bags, despite being widely considered to produce large amounts of extraradical mycelial biomass (Hobbie & Agerer, ). On the contrary, Amphinema (also medium‐distance fringe) colonised bags vigorously, despite its scarce representation on root tips. Furthermore, we did not find consistent support for the hypothesis that hydrophobic, cord‐forming genera would prefer organic substrates over sand bags; Piloderma was, as expected, more abundant in soil bags, but Cortinarius had no significant preference. Amphinema was even preferentially found in sand bags, potentially being outcompeted (or diluted) by more selective foragers in soil bags (Fig. ). Amphinema, Tomentella and Cenococcum differed significantly in relative abundance between ingrowth bags with different types of soil. Although the underlying mechanism is not clear, this observation suggests that some ectomycorrhizal fungi can detect and selectively direct extraradical mycelial growth towards specific substrates, and/or that mycelial colonisation may be confined to specific micro‐niches by local environmental filtering (Rosling et al ., ). However, contrary to our hypothesis, short‐distance and medium‐distance smooth types, which are supposedly nitrophilic (Moeller et al ., ; Sterkenburg et al ., ; Defrenne et al ., ; Pellitier & Zak, ), did not preferably colonise the mull soil, which had the lowest C : N ratio and highest inorganic N mineralisation (Table ). The lack of community response to apatite amendment in sand bags is concordant with the results of Hedh et al . ( ), possibly indicating a low demand for mineral‐bound P or large functional redundancy in terms of weathering. 
Although we conclude that exploration types are not consistent predictors of soil foraging, we observed systematic differences among genera regarding their extraradical mycelial proliferation in different substrates. Tedersoo et al . ( ) claimed that ectomycorrhizal lineage is a better predictor of functional attributes than exploration type. However, we also observed contrasting patterns within lineages (e.g. Piloderma vs Tylospora and Amphinema in the Athelioid lineage). Piloderma and Cortinarius have been highlighted as having high extraradical biomass, but we rather observed low extraradical proliferation (DNA in the bags) of these genera. Still, these genera often dominate ectomycorrhizal fungal communities and attain a high biomass in old, nutrient‐limited boreal forests (Sterkenburg et al ., ; Kyaschenko et al ., ). Accumulation of extraradical mycelial biomass does not depend solely on growth rate, but also on biomass turnover (Clemmensen et al ., , ; Ekblad et al ., ; Hagenbo et al ., ). Species with slow turnover of extraradical hyphae, for example by forming long‐lived cords (Treseder et al ., ), may attain high biomass over a long period of time in spite of slow growth. Here, we studied colonisation of bags during only one growing season while Hagenbo et al . ( ) studied successional colonisation of bags over a longer period and observed that Cortinarius progressively increased over multiple years, suggesting slow but persistent net accumulation of perennial mycelial biomass. Furthermore, Piloderma selectively colonised soil substrates with organic resources, while Cortinarius did not display such preference. Members of Cortinarius are known for their capacity to decompose and derive nutrients from complex organic substrates (Bödeker et al ., ; Lindahl et al ., ). These two genera are also recognised as nitrophobic (Lilleskov et al ., ; van der Linde et al ., ), as is Hygrophorus (Solly et al ., ). By contrast, the genera that proliferated extensively in soil bags, relative to their more scarce representation on roots, that is Amphinema , Tylospora and Tomentella , have often been described as nitrophilic (Kranabetter et al ., ; Sterkenburg et al ., ; Hagenbo et al ., ), at least in a boreal context with low nitrogen deposition. Amphinema was also more abundant in sand bags than in soil bags, suggesting a space‐filling growth strategy. Ample production of extraradical mycelium may be advantageous at high levels of mobile, inorganic nutrients, by minimising leaching and retaining nutrients in the mycorrhizal system (Hobbie & Agerer, ; Bahr et al ., ). However, Amphinema has hydrophobic cords, implying that this growth strategy is not restricted to noncord‐forming, hydrophilic exploration types. As exploration types do not seem to be consistent predictors of mycelial foraging, we see a need for alternative frameworks, for example based on nitrophobicity and/or hydrophobicity (Almeida et al ., ). Most low‐proliferating taxa in our study are recognised as nitrophobic, hydrophobic and linked to exploitation of solid organic resources in nutrient‐limited environments. A long mycelial lifespan may enable them to accumulate high biomass over time (Hagenbo et al ., ) in spite of scarce resource availability. The high‐proliferating taxa, on the contrary, are relatively nitrophilic/hydrophilic and may rather employ a space‐filling strategy to minimise losses of soluble inorganic nutrients under rich conditions. 
These coordinated traits are likely to be continuously distributed along a trait axis (van der Linde et al ., ) similar to the leaf economics spectrum of plants (Wright et al ., ). Further characterisation of trait axes will facilitate understanding of ecological niches, plasticity and adaptations of ectomycorrhizal fungal species along environmental gradients. To this end, more studies are needed to assess ectomycorrhizal foraging patterns in other types of environments, ideally also with more directly quantitative methods, as relative DNA abundances are only indirectly linked to mycelial biomass.
KJ, KC, HW and BL designed the study. KJ collected the data and performed the analyses. KJ wrote the first draft of the manuscript. All authors contributed to the interpretation and writing.
Fig. S1 Illustration of cafeteria setup. Table S1 Model output from lme‐models of the log‐ratio of ectomycorrhizal exploration types in bags relative to roots, and in soil bags relative to sand bags. Table S2 Relative abundance of ectomycorrhizal fungi on roots, and sand‐ or soil‐filled ingrowth bags. Table S3 Model output from lme‐models of log‐ratio of ectomycorrhizal genera in bags relative to roots. Table S4 Model output from lme‐models of log‐ratio of ectomycorrhizal genera in soil‐filled relative to sand‐filled ingrowth meshbags. Table S5 Model output from PERMANOVA testing the effect of different soil substrates on ectomycorrhizal fungal community composition in ingrowth mesh bags. Table S6 Relative abundance of ectomycorrhizal fungi in ingrowth bags filled with different organic substrates. Table S7 Model output from models testing the effect of substrates in ingrowth meshbags on ectomycorrhizal fungal genera. Table S8 Output from post hoc Tukey HSD test on ectomycorrhizal genera that displayed preference towards any soil substrate. Table S9 Model output from PERMANOVA testing the effect of different sand substrates on ectomycorrhizal fungal community composition in ingrowth mesh bags. Table S10 Relative abundance of ectomycorrhizal fungi in ingrowth bags filled with sand.
|
Connections between postparotid terminal branches of the facial nerve: An immunohistochemistry study
|
c0e3f108-92b5-4c25-8719-447a014a7595
|
10098607
|
Anatomy[mh]
|
INTRODUCTION Connections between the five terminal branches of the facial nerve (cranial nerve [cn] VII) have been described since the second half of the nineteenth century. They constitute a structure with multiple connected branches called the “subparotid plexus” (Hovelaque, ; Sappey, ) or “parastenon plexus” (Pons‐Tortella, ). Most authors have used this description and it has served as a basis for different proposed facial nerve classifications (Alomar, ; Bernstein & Nelson, ; Davis et al., ; Katz & Catalano, ; Kitamura & Yamazaki, ; Martínez Pascual et al., ; Myint et al., ; Tzafetta & Terzis, ). Connections between facial nerve branches have been found more frequently in the temporofacial division (TF). This has been attributed to its greater number of branches and its plexiform nature; the cervicofacial division (CF) supplies fewer branches, so connections between them are less common (Davis et al., ; Diamond et al., ; Lineaweaver et al., ; Martínez Pascual et al., ; Pons‐Tortella, ; Salame et al., ; Tansatit et al., ). It has been assumed that the fibers within these connections are motor because the five terminal branches of the facial nerve supply the mimic muscles, and the sensory innervation of the face depends on the trigeminal nerve (cn V) (Shoja et al., ). However, branches of cn VII also communicate with terminal ramifications of cn V in the face: the auriculotemporal (Kwak et al., ; Namking et al., ; Tansatit et al., ), supraorbital (Hwang et al., ; Li et al., ; Martínez Pascual et al., ), infraorbital (Hu et al., ; Hwang et al., ; Martínez Pascual et al., ; Tansatit et al., ), and mental (Hwang et al., ; Kim et al., ; Martínez Pascual et al., ; Touré et al., ) nerves, or the well‐known connection between the lingual nerve and the chorda tympani conveying the sense of taste (Dixon, ; Kwak et al., ; Hwang et al., ; Diamond et al., ; Shoja et al., ; Takezawa & Kageyama, ). Different types of fibers have been proposed to constitute the VII‐V connections: autonomic (Bowden & Mahran, ; Lewy et al., ; Tansatit et al., ), motor (Conley, ; Martin & Helsper, ; Odobescu et al., ) or sensory (Baumel, ; Cobo, Abbate, et al., ; Cobo, Solé‐Magdalena, et al., ; Odobescu et al., ; Yang et al., ). Thus, non‐motor fibers of VII‐V connections could continue to travel through the postparotid facial connections, so not all the fibers inside those connections are necessarily motor type. However, all these studies of the VII‐VII and V‐VII connections are based on anatomical dissection, which does not reveal the real functions of the component fibers. Therefore, the goal of our study is to determine, using specific immunohistochemical techniques, whether the connections between the terminal branches of the facial nerve are purely motor or whether they also carry other types of fiber.
MATERIALS AND METHODS The study was carried out on 13 hemiheads from embalmed adult Caucasian bodies (seven men, five women) from the Body Donations and Dissecting Rooms Centre of the Complutense University of Madrid. The average age of the cadavers was 83 years (range 75–90 years) at the time of death. The authors state that every effort was made to follow all local and international ethical guidelines and laws that pertain to the use of human cadaveric donors in anatomical research (Iwanaga et al., ).

2.1 Microdissection The extrapetrous course of 13 facial nerves (seven right, six left) was dissected from proximal to distal using microsurgical forceps and scissors with the help of surgical loupes (2.5×) (Optimedic®). The facial nerve was dissected and the postparotid terminal facial‐facial connections were identified, sectioned, and extracted for processing.

2.2 Immunohistochemistry Immunohistochemistry was performed in the Section of Anatomy of the Department of Neuroscience at the University of Padova and the Department of Immunology, Ophthalmology and ENT at the Complutense University School of Medicine in Madrid. The connections were processed and embedded in paraffin blocks. Transverse sections (6 μm thickness) were cut with a microtome and mounted on slides, deparaffinized, and rehydrated before staining. Antigen retrieval was performed with sodium citrate (pH 6.1) for 20 min at 95°C. The sections were then washed in phosphate‐buffered saline (PBS) and placed in 1% hydrogen peroxide (H₂O₂) in PBS for 10 min at room temperature, and then put into blocking buffer (0.2% bovine serum albumin in PBS) for 1 h at room temperature. After washing with PBS, they were incubated for 24 h at 4°C with the primary antibody anti‐choline acetyltransferase (ChAT) (GeneTex® [N1N3]; 1:800). A negative control was performed for each different sample. The slides were washed with PBS and incubated with secondary antibody (goat anti‐rabbit, Jackson ImmunoResearch®; 1:300) in blocking buffer for 1 h at room temperature. After further washing, color was developed with DAKO chromogen for 30 s. The samples were washed with distilled water and finally counterstained with hematoxylin, dehydrated and mounted.

2.3 Image analysis The immunohistochemical images were photographed under a microscope (Nikon E800M). Motor axons (ChAT positive) (Figure ) were counted in every sample using ImageJ Fiji 1.52p software (National Institutes of Health®) at 20× magnification. The connections were classified on the basis of the percentage of motor fibers: strongly positive (>75% ChAT+ fibers), intermediately positive (50%–75% ChAT+ fibers) and weakly positive (<50% ChAT+ fibers) (Figures ).

2.4 Statistical analysis Both descriptive and analytical statistics were used; percentages, means, ranges, and standard deviations were collected and compared. A one‐factor experimental design was used to detect significant differences in the average number of fibers with respect to side. The Kolmogorov–Smirnov test was used to determine the normality of the underlying data distribution. A significance level of alpha = 0.05 was used for all tests. SPSS software version 22 (IBM Corporation, Armonk, NY) was used for the analyses.
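To make the classification rule in the Image analysis subsection explicit, here is a small R sketch (the authors used ImageJ for counting and SPSS for statistics; R is used here only because it is the scripting language appearing elsewhere in this document). The function and its name are ours; the example counts are taken from the Results below.

```r
## Classify a connection by its percentage of ChAT-positive (motor) axons,
## following the thresholds stated in the Image analysis subsection
classify_connection <- function(n_chat_pos, n_chat_neg) {
  pct_pos <- 100 * n_chat_pos / (n_chat_pos + n_chat_neg)
  if (pct_pos > 75) {
    "strongly positive"
  } else if (pct_pos >= 50) {
    "intermediately positive"
  } else {
    "weakly positive"
  }
}

classify_connection(64, 20)    # temporo-temporal connection: 76.2% ChAT+ -> "strongly positive"
classify_connection(216, 253)  # pooled temporo-zygomatic fibers: 46.1% ChAT+ -> "weakly positive"
```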
RESULTS A total of 17 VII‐VII connections were analyzed. These connections were more frequent on the left side (10 connections) than on the right (seven). Nine of the connections were in male specimens while the other eight were in female cadavers. Tables and show the distribution of connections by side and sex, respectively. The different types of connections are reported first, followed by a global overview.

3.1 Temporo‐temporal A temporo‐temporal (tt) connection was found in one case. It had 84 fibers, 76.2% (64/84) of them ChAT positive (ChAT+) and the other 23.8% (20/84) ChAT negative (ChAT−). Therefore, this connection was strongly positive (Figure ).

3.2 Temporo‐zygomatic Two temporo‐zygomatic (tz) connections were found. The number of fibers in them totalled 469, and the average was 234 (range 134–335); 46.1% (216/469) were ChAT+ while 53.9% (253/469) were ChAT−. Both tz connections were weakly positive (Figure ).

3.3 Zygomatic‐zygomatic There were two zygomatic‐zygomatic (zz) connections. The number of fibers in them totalled 685, and the average was 343 (259–426); 69.8% (478/685) were ChAT+ and 30.2% (207/685) were ChAT−. One connection was classed as strongly positive while the other was intermediately positive (Figure ).

3.4 Zygomatic‐buccal The zygomatic‐buccal (zb) connection was the most frequent, being found in six cases. Five of the buccal branches (b) arose from the TF division and one from the CF division. The total number of fibers was 1871 and the average was 312 (144–587); 53.9% (1009/1871) were ChAT+ and 46.1% (862/1871) were ChAT−. Five were intermediately positive and the other was weakly positive (Figure ).

3.5 Bucco‐buccal We found just one bucco‐buccal (bb) connection. Both b branches belonged to the TF division. It had 115 fibers, 20% (23/115) of them ChAT− and 80% (92/115) ChAT+, so it was a strongly positive connection (Figure ).

3.6 Bucco‐mandibular There was a bucco‐mandibular (bm) connection in three facial nerves. Two b branches came from the TF division and the other from the CF. The total number of fibers was 858 and the average was 286 (259–322), with 68.1% (584/858) of them ChAT+ and 31.9% (274/858) ChAT−. Two connections were intermediately positive while the other was strongly positive (Figure ).

3.7 Mandibulo‐cervical There was a mandibulo‐cervical (mc) connection in two facial nerves. These had the highest number of fibers, averaging 399 (232–566) with a total number of 798, 32.7% (261/798) being ChAT− and 67.3% (537/798) ChAT+. One connection was strongly positive and the other intermediately positive (Figure ).

3.8 Global view The average number of fibers in the VII‐VII connections overall was 287 with a standard deviation of 145.46 (range 84–587), and the average proportion of positive fibers was 63% with a standard deviation of 15% (range 37.7%–91.5%). The average number of fibers on the left side was 319 with a standard deviation of 146.49 (range 134–587), and the average proportion of positive fibers was 58.9% with a standard deviation of 17.2% (range 37.6%–91.5%). In contrast, the average number of fibers on the right side was 241 with a standard deviation of 141.5 (range 84–430), and the average proportion of positive fibers was 68.9% with a standard deviation of 9.82% (range 55.6%–80%) (Figure ). The average number of fibers in males was 302 with a standard deviation of 164.81 (range 84–587) and the average proportion of positive fibers was 61.21% with a standard deviation of 14.03% (range 43.3%–83.2%).
The average number of fibers in females was 270 with a standard deviation of 129.3 (range 115–566) and the average proportion of positive fibers was 64.9% with a standard deviation of 17.1% (range 37.6%–91.5%) (Figure ). The distributions of both ChAT+ and ChAT− fibers were normal (Kolmogorov–Smirnov test p‐values = 0.821 and 0.871, respectively) and the ANOVA table associated with the one‐factor experimental design supported the hypothesis of equality between the average numbers of positive fibers by side (p‐value = 0.429) and of negative fibers by side (p‐value = 0.273). Similar results were obtained for sex (p‐values = 0.926 for ChAT+ fibers and 0.483 for ChAT− fibers). Therefore, there were no statistically significant side or sex differences. Strongly positive ChAT+ connections were found in 29.41% of the connections in the sample (five of 17), intermediately positive in 52.94% (nine of 17) and weakly positive in 17.65% (three of 17).
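The side and sex comparisons described above can be reproduced in outline with the following R sketch (the authors used SPSS v22; the normality check against a standard normal after scaling is only illustrative). The data frame and its counts are hypothetical, not the case data.

```r
## Hypothetical per-connection fiber counts (one row per connection)
set.seed(7)
fibers <- data.frame(
  n_pos = round(runif(17, 60, 450)),            # ChAT-positive axon counts
  n_neg = round(runif(17, 20, 260)),            # ChAT-negative axon counts
  side  = rep(c("left", "right"), length.out = 17),
  sex   = rep(c("male", "female"), length.out = 17)
)

# Normality of the fiber-count distributions (Kolmogorov-Smirnov, as in the text)
ks.test(as.vector(scale(fibers$n_pos)), "pnorm")
ks.test(as.vector(scale(fibers$n_neg)), "pnorm")

# One-factor ANOVA: mean numbers of ChAT+ and ChAT- fibers by side and by sex
summary(aov(n_pos ~ side, data = fibers))
summary(aov(n_neg ~ side, data = fibers))
summary(aov(n_pos ~ sex,  data = fibers))
summary(aov(n_neg ~ sex,  data = fibers))
```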
DISCUSSION Previous studies of VII‐VII connections were based on anatomical dissection, so the real functions of those fibers could not be established. No other immunohistochemical studies of the types of fibers in the postparotid facial connections in humans were found in the literature studied, so our results cannot be compared with others. ChAT antibody has been proved specific/selective for motor axons in peripheral nerves (Castro et al., ; Courties et al., ; Kim et al., ; Lago et al., ; Lago & Navarro, ; Zhou et al., ), even after nerve injury (Castro et al., ; Kim et al., ; Lago et al., ; Lago & Navarro, ; Zhou et al., ). Therefore, it was chosen in this study to establish whether or not the VII‐VII connections are exclusively motor. The results showed that every connection had ChAT+ and ChAT– fibers, so they all contained both motor and non motor fibers. The connection with the highest percentage of ChAT− fibers was the temporo‐zygomatic (53.9%), while the bucco‐buccal had the highest percentage of ChAT+ fibers (80%). This could be because the buccal branches innervate the middle and lower facial thirds, where there are more muscles (Le Louarn, ) than in the upper third (Abramo, ; Abramo et al., ); bucco‐buccal connections could therefore carry a greater motor axonal load. Connections between the sensory terminal ramifications of cn V in the face and the terminal branches of the extrapetrous cn VII have already been described with the auriculotemporal (Kwak et al., ; Namking et al., ; Tansatit et al., ), supraorbital (Hwang et al., ; Li et al., ; Martínez Pascual et al., ), infraorbital (Hu et al., ; Hwang et al., ; Martínez Pascual et al., ; Tansatit et al., ), and mental (Hwang et al., ; Kim et al., ; Martínez Pascual et al., ; Touré et al., ) nerves. The nature of those connections has been discussed by many authors. Some believe they can be autonomic (Bowden & Mahran, ; Lewy et al., ; Tansatit et al., ), some motor (Conley, ; Martin & Helsper, ; Odobescu et al., ) and some sensory (Baumel, ; Cobo, Solé‐Magdalena, et al., ; Odobescu et al., ; Yang et al., ). Thus, assuming that these fibers continue to travel from the V‐VII connection through the rest of the extrapetrous facial nerve, it follows that the terminal branches of the facial nerve and their connections also have non‐motor fibers (Cattaneo & Pavesi, ; Cobo, Solé‐Magdalena, et al., ). Facial muscles typically lack proprioceptors, and facial proprioceptive impulses travel via branches of the trigeminal nerve to the central nervous system (Cattaneo & Pavesi, ; Cobo, Solé‐Magdalena, et al., ). Those propioceptive fibers from the trigeminal nerve in facial‐trigeminal connections could also be conveyed in VII‐VII connections and could innervate sensory structures in facial muscles that substitute for typical muscle spindles in facial proprioception (Cattaneo & Pavesi, ; Cobo, Abbate, et al., ). These findings could explain some of the feelings experienced by a patient with facial palsy after partial recovery, such as painful pressure at some points of the face (e.g. the zygomatic muscles, chin, forehead) (Valls‐Solé, ). Furthermore, muscle mass contraction or synkinesis, both observed in facial palsy patients, could also be related to this mixture of fibers traveling through the multiple connections in the face (Raslan et al., ; Valls‐Solé, ). Indeed, some authors explain synkinesis in terms of activation of latent nervous circuits pre‐existing in the healthy subject (Ton Van & Giot, ). 
We believe that these preexisting neural circuits may run through the VII‐VII connections. We can therefore affirm that the nerve fibers traveling inside the postparotid terminal facial branch connections are not exclusively motor. The nature of the fibers that do not stain for ChAT remains to be established using antibodies specific/selective for the different types of sensory and autonomic fibers.
|
An unusual blunt force trauma pattern and mechanism to the cranial vault: Investigation of an atypical infant homicide
|
7c0c92b5-1657-4ed2-893f-95f295132fe7
|
10098721
|
Forensic Medicine[mh]
|
INTRODUCTION Differentiating accidental from inflicted injury is a principal element of medico‐legal death investigations in cases of infant mortality and suspected child abuse. Non‐ambulatory infants with unexplained fracture(s) upon postmortem examination often prompt suspicions of inflicted injury. Fractures in these cases require a differential diagnosis between child abuse, accidental injury, birth trauma, and natural disease. Determination of abuse rarely hinges upon the presence of a single fracture or fracture type, rather, in cases of suspected inflicted injury, forensic investigators must consider contextual information to understand the proximate cause. Clinical literature provides guidelines for evaluating suspected cases of child abuse including assessing the developmental stage of the child, the full scope of injuries, their severity, relative degrees of healing, and medical history including the reported mechanism of injury [ , , ]. Suspicions for child maltreatment are raised if the child is a preambulatory infant with any injury, there are injuries to multiple organ systems, different stages of healing are present, patterned injuries are observed, there are injuries to unusual locations like the torso, ears, face, neck or upper arm, there are unexplained serious injuries, or there is evidence of neglect . Additionally, if no explanation or only a vague explanation of the cause of the injury is provided, a history which is implausible for the child's physical or developmental stage, a delay in seeking medical care, inconsistencies in caretaker or witness accounts of the injury event, or an explanation that is inconsistent with the pattern, severity, or age of the injury could also be indicators of child abuse . An important part of the investigation is determining whether the caretaker's explanation of the traumatic event is consistent with the injuries observed. This requires an evaluation of the fracture pattern to estimate the trauma mechanism or the correlation of the fracture pattern with a known injury mechanism. The injury context and fracture pattern interpretation are especially important for cranial fractures as they are common injuries in children for both accidental and non‐accidental circumstances and have a lower specificity for abuse than other injuries, such as rib fractures and metaphyseal fractures [ , , , , ]. The percentage of pediatric cranial fractures attributed to abuse is relatively low compared to cranial fractures resulting from accidental injury [ , , , ]. However, the association between a cranial fracture and abuse dramatically increases in infants and young children. According to clinical studies, approximately one in three infants and toddlers with cranial fractures are victims of inflicted injury [ , , ]. These studies largely reflect patterns of abusive cranial trauma in living children. The frequency of cranial fractures in cases where the child died due to abusive activity remains unknown; however, it is estimated that abusive head trauma in children less than a year old has an annual incidence of 33 to 38 cases per 100,000 children and nearly 25% of those cases result in death . Several studies have found that the presence of multiple cranial fractures and variable degrees of healing is more suggestive of abuse [ , , , , ], rather than the complexity of the fracture as was previously thought . 
Understanding fracture patterns in forensic cases is difficult as few controlled studies of cranial impacts have been completed for human skeletal material. While there are some experimental studies on fracture initiation and propagation in the human cranium [ , , , , , , , , , ], few address the structural or biomechanical differences of the developing infant head while others have utilized immature porcine models as a proxy for the infant cranium [ , , , , ]. These studies have demonstrated significant differences in the biomechanical properties of infant bone and sutures compared to adults and have shown that the age of the child , impact energy , impact surface [ , , , , ], head entrapment , and impact shape all influence the degree and pattern of fracturing. To supplement the limited number of controlled experiments, retrospective clinical studies and case reports are often used to inform fracture pattern interpretation in subadult individuals. Unfortunately, in cases of suspected abuse, the trauma mechanism is often unknown or unsubstantiated [ , , , ]. This report presents a case of an infant who died with acute and remote injuries indicative of an abusive history, including evidence of asphyxia, an unusual pattern of multiple cranial fractures, and multiple metaphyseal and diaphyseal fractures of the left arm. Subsequently, the perpetrator admitted to a trauma mechanism which may explain the atypical cranial fracture pattern. The findings of this case expand the spectrum of abusive head trauma mechanisms in infants, provide a pattern of skeletal trauma indicative of abuse potentially resulting from bilateral compression of the cranium, and provide radiographic, gross, and histological evidence of the fractures resulting from inflicted injury. Furthermore, this case report highlights the need for further research into patterns of blunt force trauma in the infant cranium and the development of methods for fracture age estimation.
CASE REPORT A four‐month‐old male infant was found unresponsive in a crib in the early morning by the infant's mother. The mother attempted cardiopulmonary resuscitation (CPR) and law enforcement were called to the residence. Upon arrival, police continued CPR while the infant was transported to the hospital where he subsequently died. 2.1 Medical history The infant was born vaginally at 33‐weeks gestational age with no recorded complications or medical interventions associated with the delivery. The child was hospitalized for prematurity. Both the infant and the mother tested positive for tetrahydrocannabinol (THC) following birth and a caseworker with Child Protective Services (CPS) was assigned to the family. A cranial ultrasound was performed 11 days after birth and was normal with no evidence for traumatic brain injury or cranial fractures. The infant was released from the hospital 25 days after birth and was under the care of his parents for 3 months until his death. 2.2 Autopsy findings The postmortem radiographic skeletal survey revealed multiple fractures to the neurocranium and radiographic evidence consistent with metaphyseal fractures of the left humerus, ulna, and radius (Figure ). External examination of the remains indicated florid petechial hemorrhage of the conjunctiva and face, most notable around the right eye. Two blue‐purple contusions, measuring 1.2 and 0.8 centimeters, were observed on the left arm. According to World Health Organization growth charts, the infant's body weight at the time of postmortem examination was less than the 5th percentile, the crown‐heel length was in the 10th percentile, and the head circumference was less than the 5th percentile for a 10‐week adjusted age male. Internal examination revealed hemorrhagic cerebrospinal fluid, two areas of hemorrhage in the soft tissues of the left and right fronto‐parietal regions of the scalp, and corresponding hemorrhages in the soft‐tissues adherent to the skull underlying the scalp hemorrhages (Figure ). Multiple cranial fractures extending from the areas of soft tissue hemorrhage were also observed. Scant amounts of subarachnoid hemorrhage were noted on the brain. The brain was retained for neuropathological analysis. The skull cap and elements of the left arm were resected and submitted for anthropological analysis. 2.3 Neuropathology findings Consistent with the forensic pathology findings, neuropathological analysis revealed focal subacute subarachnoid hemorrhages in the right frontal parasagittal region and left inferior parietal lobule of the brain, approximately 1 centimeter in diameter and less than 1 centimeter in diameter, respectively. There were no other observed abnormalities, pathological conditions of the brain tissues, or retinal hemorrhages. 2.4 Anthropological examination The skull cap was radiographed (Figure ) and photographed (Figure ) prior to histological sampling and maceration, revealing numerous fractures. Histological samples were resected from three cranial fractures prior to processing (Figure ). The calvarium, left humerus, left radius, and left ulna were macerated in an incubator over a two‐week period with manual removal of the soft tissues. The left parietal demonstrated a comminuted, slightly depressed defect (Fracture 1) with four associated simple, linear, radiating fractures (Fractures 2 through 5), all with rounded fracture margins and extensive subperiosteal new bone formation (Figure ). 
The majority of the Fracture 1 defect had no remaining fracture gap and the inferior margin was nearly obliterated. While Fracture 2 extended between Fracture 1 and the anterior left parietal, Fractures 3 through 5 extended to the sagittal suture. Also evident in the left parietal region was a curvilinear fracture (Fracture 6) and linear fracture (Fracture 7) adjacent to the craniotomy cut (Figure ). Since only the cranial vault was available for anthropological evaluation, the extent, number, and location of terminal points of these fractures could not be evaluated, but rounded fracture margins and subperiosteal new bone formation associated with Fracture 7 provided evidence of healing. The right parietal exhibited a simple fracture (Fracture 8) with an unusual morphology, changing direction multiple times in a stair‐step pattern that extended between the sagittal suture and the craniotomy cut. Fracture 9, a complex linear fracture with branching (Fracture 10), extended from the sagittal suture to a terminus within the right parietal. A simple, linear fracture (Fracture 11) connected Fractures 8 and 9. Fracture 12 was a simple curvilinear fracture that extended between Fracture 8 and the intersection of the lambdoid suture and craniotomy cut (Figure ). Fracture 13 was a simple, curvilinear fracture that extended between Fracture 12 and the lambdoid suture (Figure ). The final cranial fracture observed (Fracture 14) was a simple, linear fracture of the left lateral occipital, extending between the lambdoid suture and the craniotomy cut (Figure ). In total, 14 fractures were identified in the cranial vault. In the left parietal, Fractures 2 through 5 all communicated with Fracture 1. Fractures 6 and 7 did not intersect with any other fractures in the portion of the cranial vault that was evaluated. The right parietal presented a complicated fracture pattern, with all identified fractures (Fractures 8 through 13) intersecting with at least one other fracture. Fracture 14 was the only fracture observed in the occipital. All fractures exhibited evidence of healing, indicating that they occurred antemortem, and were consistent with blunt force trauma to the head. There was no evidence of perimortem trauma. It is important to note that each fracture does not represent an individual impact as multiple fractures may have occurred from the same traumatic event. The left arm exhibited multiple healing osseous injuries. A metaphyseal fracture was observed radiographically (Figure ) and grossly (Figure ) in the proximal humerus extending along the supero‐posterior and supero‐lateral metaphyseal margin and across the physeal surface (Figure ). Subperiosteal new bone formation was present along the margin. The metaphyseal fracture was largely healed in the lateral aspect with no visible fracture line, while the fracture line was visible posteriorly. Healing trauma was also identified in both the left radius and ulna. The left radius had subperiosteal new bone formation along the shaft, proximally at the radial tuberosity and distally at the metaphysis (Figure ). There was also a healing metaphyseal fracture of the anterior distal metaphysis (Figure ) that extended to the physeal surface of the metaphysis (Figure ). The left ulna exhibited evidence of a healed fracture in the distal one‐third of the diaphysis with the distal metaphysis rotated approximately 45 degrees medially relative to the proximal two‐thirds of the diaphysis (Figure ).
Due to extensive remodeling, the fracture type was indeterminate, but the rotation of the distal ulnar metaphysis suggests an oblique or spiral fracture. There was also a metaphyseal fracture of the distal metaphysis of the ulna observed radiographically (Figure ) and grossly on the physeal surface (Figure ). The metaphyseal fractures of the humerus, radius, and ulna, and the fracture of the distal ulnar diaphysis suggest at least one traumatic event to the left arm. 2.5 Histological examination Three cranial fractures were excised for histological analysis prior to maceration to further investigate differential levels of healing. Fracture 4 grossly appeared to have rounded fracture margins and subperiosteal new bone formation. When observed histologically (Figure ); however, the fracture gap was resolving on the ectocranial and endocranial surfaces with bone formation and the gap was infiltrated with fibrous connective tissue and areas of cartilage formation. In addition, the healing fracture exhibited blurred fracture margins, numerous new capillaries, and new woven bone formation along and within the fracture gap. The unusual stair‐step fracture (Fracture 8) presented grossly with a wide fracture gap, rounded margins, and profuse subperiosteal new bone formation along the margins. Histological assessment indicated minimal bone resorption, moderate fibrous connective tissue that bridged the ectocranial fracture gap, and minimal new capillary formation (Figure ). The fracture margins were also misaligned, with the right side displaced inferiorly. Visual examination of Fracture 9 near the sagittal suture exhibited rounded margins and a distinct fracture gap while the area sampled, located inferiorly, presented less distinct evidence of healing. Histologically the sample did not exhibit a marked tissue response compared to the other histological samples. There was minimal fibrous connective tissue within the fracture gap, minimal new capillaries, and minimal evidence of bone resorption of the fracture margins (Figure ). Histological and gross examination of the cranial fractures indicated the presence of different degrees of fracture repair, providing evidence of at least three levels of healing in the calvarium. These different stages of healing include (1) an early stage in which the fracture margin was open with rounded fracture margins, minimal to moderate fibrous connective tissue, minimal new capillaries, and minimal evidence of bone resorption; (2) a reparative stage with complete infiltration of the fracture gap with extensive fibrous connective tissue and cartilaginous tissue, numerous new capillaries, and new woven bone along and within the fracture gap, and; (3) a remodeling stage in which the fracture margin was completely obliterated with persistent subperiosteal new bone formation as observed grossly in Fracture 1. 2.6 Additional investigative information Interviews with the infant's parents were prompted after it was discovered at autopsy that the infant had multiple fractures. Initially, the father claimed he accidentally hit the infant's head on a wall corner, but later admitted to repeatedly hitting the infant the night the child was found unresponsive; however, it was unclear how many times the child was struck as the father was under the influence of alcohol and marijuana. 
Eventually, the father admitted to repeated episodes of abuse in attempts to quiet the infant, including routinely striking the infant's head; tipping the chin back and squeezing down on the neck; squeezing the infant's neck and skull while pushing back on the chin; and squeezing his skull while covering the face. These mechanisms are modeled in Figure .
Medical history The infant was born vaginally at 33‐weeks gestational age with no recorded complications or medical interventions associated with the delivery. The child was hospitalized for prematurity. Both the infant and the mother tested positive for tetrahydrocannabinol (THC) following birth and a caseworker with Child Protective Services (CPS) was assigned to the family. A cranial ultrasound was performed 11 days after birth and was normal with no evidence for traumatic brain injury or cranial fractures. The infant was released from the hospital 25 days after birth and was under the care of his parents for 3 months until his death.
Autopsy findings The postmortem radiographic skeletal survey revealed multiple fractures to the neurocranium and radiographic evidence consistent with metaphyseal fractures of the left humerus, ulna, and radius (Figure ). External examination of the remains indicated florid petechial hemorrhage of the conjunctiva and face, most notable around the right eye. Two blue‐purple contusions, measuring 1.2 and 0.8 centimeters, were observed on the left arm. According to World Health Organization growth charts, the infant's body weight at the time of postmortem examination was less than the 5th percentile, the crown‐heel length was in the 10th percentile, and the head circumference was less than the 5th percentile for a 10‐week adjusted age male. Internal examination revealed hemorrhagic cerebrospinal fluid, two areas of hemorrhage in the soft tissues of the left and right fronto‐parietal regions of the scalp, and corresponding hemorrhages in the soft‐tissues adherent to the skull underlying the scalp hemorrhages (Figure ). Multiple cranial fractures extending from the areas of soft tissue hemorrhage were also observed. Scant amounts of subarachnoid hemorrhage were noted on the brain. The brain was retained for neuropathological analysis. The skull cap and elements of the left arm were resected and submitted for anthropological analysis.
Neuropathology findings Consistent with the forensic pathology findings, neuropathological analysis revealed focal subacute subarachnoid hemorrhages in the right frontal parasagittal region and left inferior parietal lobule of the brain, approximately 1 centimeter in diameter and less than 1 centimeter in diameter, respectively. There were no other observed abnormalities, pathological conditions of the brain tissues, or retinal hemorrhages.
Anthropological examination The skull cap was radiographed (Figure ) and photographed (Figure ) prior to histological sampling and maceration, revealing numerous fractures. Histological samples were resected from three cranial fractures prior to processing (Figure ). The calvarium, left humerus, left radius, and left ulna were macerated in an incubator over a two‐week period with manual removal of the soft tissues. The left parietal demonstrated a comminuted, slightly depressed defect (Fracture 1) with four associated simple, linear, radiating fractures (Fractures 2 through 5), all with rounded fracture margins and extensive subperiosteal new bone formation (Figure ). The majority of the Fracture 1 defect had no remaining fracture gap and the inferior margin was nearly obliterated. While Fracture 2 extended between Fracture 1 and the anterior left parietal, Fractures 3 through 5 extended to the sagittal suture. Also evident in the left parietal region was a curvilinear fracture (Fracture 6) and linear fracture (Fracture 7) adjacent to the craniotomy cut (Figure ). Since only the cranial vault was available for anthropological evaluation, the extent, number, and location of terminal points of these fractures could not be evaluated, but rounded fracture margins and subperiosteal new bone formation associated with Fracture 7 provided evidence of healing. The right parietal exhibited a simple fracture (Fracture 8) with an unusual morphology, changing direction multiple times in a stair‐step pattern that extended between the sagittal suture and the craniotomy cut. Fracture 9, a complex linear fracture with branching (Fracture 10) extended between the sagittal suture with its other terminus in the right parietal. A simple, linear fracture (Fracture 11) connected Fractures 8 and 9. Fracture 12 is a simple curvilinear fracture which extends between Fracture 8 and the intersection of the lambdoid suture and craniotomy cut (Figure ). Fracture 13 is a simple, curvilinear fracture that extends between Fracture 12 and the lambdoid suture (Figure ). The final cranial fracture observed (Fracture 14) was a simple, linear fracture of the left lateral occipital, extending between the lambdoid suture and the craniotomy cut (Figure ). In total, 14 fractures were identified in the cranial vault. In the left parietal, Fractures 2 through 5 all communicated with Fracture 1. Fractures 6 and 7 did not intersect with any other fractures in the portion of the cranial vault that was evaluated. The right parietal presented a complicated fracture pattern with all identified fractures [ , , , , , ] intersecting with at least one other fracture. Fracture 14 was the only fracture observed in the occipital. All fractures exhibited evidence of healing indicating they occurred antemortem and were consistent with blunt force trauma to the head. There was no evidence of perimortem trauma. It is important to note that each fracture does not represent an individual impact as multiple fractures may have occurred from the same traumatic event. The left arm exhibited multiple healing osseous injuries. A metaphyseal fracture was observed radiographically (Figure ) and grossly (Figure ) in the proximal humerus extending along the supero‐posterior and supero‐lateral metaphyseal margin and across the physeal surface (Figure ). Subperiosteal new bone formation was present along the margin. The metaphyseal fracture was largely healed in the lateral aspect with no visible fracture line, while the fracture line was visible posteriorly. 
Healing trauma was also identified in both the left radius and ulna. The left radius had subperiosteal new bone formation along the shaft, proximally at the radial tuberosity and distally at the metaphysis (Figure ). There was also a healing metaphyseal fracture of the anterior distal metaphysis (Figure ) that extends to the physeal surface of the metaphysis (Figure ). The left ulna exhibited evidence of a healed fracture in the distal one‐third of the diaphysis with the distal metaphysis rotated approximately 45 degrees medially relative to the proximal two‐thirds of the diaphysis (Figure ). Due to extensive remodeling, the fracture type was indeterminate, but the rotation of the distal ulnar metaphysis suggests an oblique or spiral fracture. There was also a metaphyseal fracture of the distal metaphysis of the ulna observed radiographically (Figure ) and grossly on the physeal surface (Figure ). The metaphyseal fractures of the humerus, radius, and ulna, and the fracture of the distal ulnar diaphysis suggest at least one traumatic event to the left arm.
Histological examination Three cranial fractures were excised for histological analysis prior to maceration to further investigate differential levels of healing. Fracture 4 grossly appeared to have rounded fracture margins and subperiosteal new bone formation. When observed histologically (Figure ); however, the fracture gap was resolving on the ectocranial and endocranial surfaces with bone formation and the gap was infiltrated with fibrous connective tissue and areas of cartilage formation. In addition, the healing fracture exhibited blurred fracture margins, numerous new capillaries, and new woven bone formation along and within the fracture gap. The unusual stair‐step fracture (Fracture 8) presented grossly with a wide fracture gap, rounded margins, and profuse subperiosteal new bone formation along the margins. Histological assessment indicated minimal bone resorption, moderate fibrous connective tissue that bridged the ectocranial fracture gap, and minimal new capillary formation (Figure ). The fracture margins were also misaligned, with the right side displaced inferiorly. Visual examination of Fracture 9 near the sagittal suture exhibited rounded margins and a distinct fracture gap while the area sampled, located inferiorly, presented less distinct evidence of healing. Histologically the sample did not exhibit a marked tissue response compared to the other histological samples. There was minimal fibrous connective tissue within the fracture gap, minimal new capillaries, and minimal evidence of bone resorption of the fracture margins (Figure ). Histological and gross examination of the cranial fractures indicated the presence of different degrees of fracture repair, providing evidence of at least three levels of healing in the calvarium. These different stages of healing include (1) an early stage in which the fracture margin was open with rounded fracture margins, minimal to moderate fibrous connective tissue, minimal new capillaries, and minimal evidence of bone resorption; (2) a reparative stage with complete infiltration of the fracture gap with extensive fibrous connective tissue and cartilaginous tissue, numerous new capillaries, and new woven bone along and within the fracture gap, and; (3) a remodeling stage in which the fracture margin was completely obliterated with persistent subperiosteal new bone formation as observed grossly in Fracture 1.
Additional investigative information Interviews with the infant's parents were prompted after it was discovered at autopsy that the infant had multiple fractures. Initially, the father claimed he had accidentally hit the infant's head on a wall corner, but he later admitted to repeatedly hitting the infant the night the child was found unresponsive; however, it was unclear how many times the child was struck, as the father was under the influence of alcohol and marijuana. Eventually, the father admitted to repeated episodes of abuse in an attempt to quiet the infant, including routinely striking the infant's head; tipping the chin back and squeezing down on the neck; squeezing the infant's neck and skull while pushing back on the chin; and squeezing his skull while covering the face. These mechanisms are modeled in Figure .
DISCUSSION In this report, we detail the postmortem examination of a four‐month‐old infant who presented with petechiae of the face and eyes, scalp and subarachnoid hematomas, fourteen calvarial fractures, three metaphyseal fractures and subperiosteal new bone formation on the left arm, and a healed fracture in the left distal ulna diaphysis. In the differential diagnosis process, birth trauma or fractures due to prematurity in the cranium were ruled out based on the normal results of the cranial ultrasound performed during the infant's hospitalization after birth. In addition, there was no evidence of natural disease or accidental injury in the infant's history nor on examination. Based on the results of the autopsy, neuropathological, and anthropological analyses, the medical examiner concluded the death was a homicide. The cause of death was deemed asphyxia due to obstruction of the airways as evidenced by the multiple petechiae on the face and conjunctivae. The blunt force injuries to the head, evidenced by multiple skull fractures at varied states of healing, bloody cerebrospinal fluid, and scalp and subarachnoid hemorrhages, were recorded as contributory causes of death. While the pattern of skeletal injuries presented in this case example are consistent with clinical studies and forensic reports of child abuse, the cranial fracture pattern and purported mechanism of abuse are unusual. The infant in this case presented with multiple bilateral fractures in the parietals, consistent with Meservy and colleagues’ characterization of pediatric cranial fractures observed radiographically where multiple skull fractures, bilateral fractures, and fractures crossing sutures were more common in cases of abuse. Theoretical consideration of the pattern of linear fractures running perpendicularly to the sagittal suture appears to be consistent with low velocity bilateral compression applied to the lateral aspects of cranial vault. This is supported by the findings of a study by Hiss and Kahana who reported bilateral temporoparietal fractures were only observed in infants who experienced bilateral compression of the head. A slow loading compression force is also congruent with the abusive mechanism described by the suspect whereby the palm of the hand covers the face of the child, and the fingers squeeze the lateral aspects of the cranium (Figure ). Furthermore, the curvilinear fractures (Fractures 6 and 12), and the depressed fracture on the left parietal (Fracture 1) may represent focal trauma from the fingers and thumb to the supra‐auricular sides of the head. All of the cranial fractures were antemortem with evidence of healing. Fracture 1 was the most advanced in fracture repair with nearly obliterated fracture margins indicating remodeling. Histological analysis revealed two potential additional stages of healing at the microscopic level evidenced by the progression from soft‐tissue response to woven bone formation. Using the histological method published by Naqvi and colleagues to estimate the age of fractures in infants, Fractures 8 and 9 with fibrin formation and fibrous connective tissue/granulation tissue would most likely occur between 12 hours and 3 days after injury. Fracture 4 with granulation tissue, cartilage, and woven bone would be consistent with an injury that is 5 to 7 days old. 
Although there was no histological sample taken of the fracture with the most advanced gross healing (Fracture 1), Naqvi and colleagues indicate fracture union occurred in most fractures in their sample 22 to 28 days after injury . However, Naqvi et al. do not identify the skeletal elements utilized to develop the method nor has the method been independently validated on a sample of infant cranial fractures where the time elapsed since injury is known. The lack of research and methods for the accurate estimation of infant cranial fracture age limits the ability to correlate the histologic differences between the fractures in this case with different traumatic events. Additional factors such as differences in the extent of the fracture gaps and reinjury of a pre‐existing fracture further convolute determination of multiple traumatic events. The stair‐step morphology of Fracture 8 was also atypical in this case example, as this type of fracture has not previously been reported in the literature except in cases of thermal fracturing . Considering the reported abuse mechanism, the stair‐step pattern of Fracture 8 could represent multiple contiguous fractures from repeated loading of the skull at different times with variation of the focal points of compressive force (i.e., the placement of the hand and fingers). As Berryman and colleagues describe, antemortem fractures may lengthen as the result of a new traumatic event if the energy imparted cannot be dissipated by the preexisting fracture. However, the morphology of Fracture 8 is not observed elsewhere in the literature nor have there been controlled fractography studies demonstrating the pattern. As fractures initiate, they are expected to be straight, propagating perpendicular to the maximum tensile stresses, but the propagation path is influenced by intrinsic and extrinsic conditions . The conditions under which the stair‐step pattern will occur remain unknown. Provided the current lack of controlled experiments on cranial fracture patterns and healing, the fracture pattern of the cranium cannot solely indicate repeated traumatic events and a history of abuse; however, clarity as to the circumstances in which these injuries occurred is provided by the postcranial injuries which are highly correlated with physical abuse. Metaphyseal fractures in infants are due to the greater susceptibility of the developing trabeculae in the primary spongiosa of long bone metaphysis to planar failure near the bone's proximal or distal end and are highly correlated with inflicted injury in young children – particularly non‐ambulatory infants [ , , , ]. Kleinman and colleagues have suggested metaphyseal injuries are produced when forces of torsion and/or tension are exerted on an infants' extremities. These forces are associated with yanking or twisting of the arms and legs or the uncontrolled flailing of the limbs during shaking episodes [ , , ]. The medial rotation of the distal ulnar head also indicates a torsional force applied over the left extremity causing a fracture.
CONCLUSION The interpretation of fracture pattern and timing is of utmost importance in the differential diagnosis of trauma, particularly if the injuries could be the result of either an accident or from an inflicted injury and not from documented birth trauma or due to an underlying health condition. Often, these determinations are contingent upon the type and location of fractures, the age and developmental status of the child, and the history provided by the caretaker. This report is illustrative of an infant with multiple injuries sustained over time. In this case, the perpetrator's admission is consistent with the observed cranial fracture patterns. However, the mechanism of the injuries should be approached with caution since there is no appropriate research to support the specific injury patterns. Furthermore, this case provides radiographic, gross, and histological evidence of healing cranial and postcranial fractures. The limitations imposed on the interpretation of the fracture pattern and histological data in this case demonstrate the need for increased research into fracture propagation under variable intrinsic and extrinsic factors and the progression of histomorphological repair in the pediatric cranium.
The histological component of this work was supported by the National Institute of Justice [award number 2017‐DN‐BX‐0166]. The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect those of the U.S. Department of Justice.
The authors declare there are no conflicts of interest.
The basics of commonly used molecular techniques for diagnosis, and application of molecular testing in cytology
INTRODUCTION The rapid advance of molecular techniques, especially next‐generation sequencing (NGS), makes molecular testing feasible in daily practice. , , Molecular genetic/genomic testing plays a vital role in diagnosis, prognosis, and prediction to therapy by testing various biomarkers in a number of diseases and has become an essential part of personalized medicine. Cytopathology has been a tremendous diagnostic tool for examining morphology at the cellular level. The cytology samples are obtained by minimally invasive procedures that are usually well tolerated by patients, especially patients with comorbidities. Through utilization of cytology samples, pathologists can make a morphologic diagnosis that is associated with risk of malignancy to help with patient management. However, cytopathology has its limitations, the most common being limited diagnostic material in cytology samples. Molecular testing, in these situations, can really help refine morphologic diagnosis by further risk stratification, as well as providing additional prognostic and predictive information to help select the right patients for personalized treatment. , This is especially important for patients who are in the advanced stages of their diseases and cannot tolerate more invasive procedures, making cytology samples the only available material for molecular testing. Various preparations can be made from cytology samples, including direct smears, liquid‐based cytology (LBC), and cell blocks. Different fixatives are used in these preparations. In general, non‐formalin fixed cytology materials are more suitable for molecular testing than formalin‐fixed tissue or cell blocks, mainly because these samples provide well‐preserved high‐quality nucleic acids that are easily extractable and stable. , , Even though the cytology samples are better material for molecular testing; they have been under‐utilized and less known to many pathologists and treating clinicians. Currently, most molecular tests are performed in formalin‐fixed paraffin‐embedded (FFPE) samples, mainly because these tests are validated on FFPE samples. , , However, many studies have shown that cytology preparations such as smears and LBC have at least comparable, if not better, performance in molecular testing than FFPE samples. , Therefore, cytology samples could be an excellent resource for molecular testing if samples are properly collected and molecular tests are carefully validated. In addition, rapid on‐site evaluation (ROSE) performed by cytopathologists can help ensure adequate samples are obtained for downstream molecular testing. While most molecular testing is performed on FFPE tissue samples there are several well‐established molecular diagnostic assays, commercial and lab‐developed tests (LDT) that can be performed with cytology samples. For instance, molecular testing in cytologically indeterminate thyroid nodules can help further risk‐stratify these nodules. Afirma (Veracyte, Inc.) and Thyroseq (University of Pittsburgh Medical Center, Pittsburgh, PA, and CBLPath, Inc.) assays are widely used commercially available molecular tests for this indication. , , NGS‐based molecular testing has been applied in cytology samples obtained by endobronchial ultrasound‐guide fine needle aspiration (EBUS‐FNA) to molecular profile non‐small cell lung cancer (NSCLC) to detect potentially actionable mutations in select patients. 
, , , , , , , The same samples can also be utilized to perform PD‐L1 immunocytochemistry to select and predict immunotherapy treatment response. Molecular testing can also be used in cytology samples obtained by endoscopic retrograde cholangiopancreatography (ERCP) to help make more definitive diagnoses of lesions in the pancreaticobiliary tract. , , As molecular testing has become essential for patient care and is often requested to be performed using cytology samples, pathologists must understand the basics of molecular diagnostic methodology, indications for molecular testing, and how to best utilize various cytological samples for molecular testing. As with all laboratory tests, good starting material is crucial for accurate test results. This review will focus on the pre‐analytical factors that may influence the reliability of molecular testing. Additionally, this review will summarize common molecular techniques utilized in molecular testing of cytology specimens in daily practice. PRE‐ANALYTICAL CONSIDERATIONS 2.1 Slide preparation There are more pre‐analytical variables in cytology specimens compared to routine FFPE tissue samples. There are different fixatives and staining methods used in cytology slide preparations. The common cytology specimens consist of direct smears, LBC, and cell blocks. , , , , The updated molecular testing guideline for the selection of lung cancer patients for treatment with targeted tyrosine kinase inhibitors from the College of American Pathologists (CAP), the International Association for Molecular Pathology (AMP), and the Association for Molecular Pathology (AMP) recommend that any cytology sample, with adequate cellularity and preservation, may be used for molecular testing (Table ). Direct smears are typically created during fine needle aspiration (FNA) procedures. Different types of glass slides can be used while producing these smears. Studies show that fully frosted slides keep the highest cell retention and minimal cell loss during fixation, compared to the positively charged slides and non‐frosted slides. However, dislodging tumor cells from fully frosted slides can be challenging. Therefore, fully frosted slides are less frequently used for nucleic acid extraction. , Direct smears typically are enriched for tumor cells with intact whole nuclei rather than fragments of nuclei on the cell block or FFPE histology slides. One cellular FNA slide can contain up to 1,000,000 tumor cells. The minimum number of cells in the current NGS tests typically require 1000–5000 tumor cells, with a minimum tumor percentage of 20%. , , , The cells can be extracted from the slides using scraping or cell lifting with Pinpoint solution. Direct scraping of archival slides has a higher nucleic acid yield than cell lifting. , Microdissection of whole cells on the smears will help with tumor enrichment. Direct smears can be both alcohol‐fixed or air‐dried and they are both suitable for isolation of high quality nucleic acids. , , Nucleic acids are better preserved in alcohol than in formalin. Many studies have shown improved nucleic acid quality and NGS performance with smears compared to cell blocks. , , , Direct smears are commonly stained with Papanicolaou or Diff Quik staining methods. It has been shown that using direct smears with both staining methods, nucleic acids can be successfully isolated for various molecular tests. This is also true of archived smears. 
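As a concrete illustration of the cellularity and tumor-fraction figures quoted above, a simple pre-analytical check can be expressed in a few lines of code. The sketch below is illustrative only and is not part of the original review: the function name and the default cut-offs (1000 tumor cells and a 20% tumor fraction, taken from the numbers cited above) are assumptions that a laboratory would replace with its own validated thresholds for a given assay.

```python
def smear_adequate_for_ngs(tumor_cells: int, total_cells: int,
                           min_tumor_cells: int = 1000,
                           min_tumor_fraction: float = 0.20) -> dict:
    """Rough pre-analytical adequacy check for an FNA smear.

    Defaults mirror the figures quoted in the text (1000-5000 tumor
    cells, >=20% tumor); they are illustrative, not validated cut-offs.
    """
    if total_cells <= 0:
        raise ValueError("total_cells must be positive")
    tumor_fraction = tumor_cells / total_cells
    return {
        "tumor_fraction": round(tumor_fraction, 3),
        "enough_tumor_cells": tumor_cells >= min_tumor_cells,
        "enough_tumor_fraction": tumor_fraction >= min_tumor_fraction,
        "adequate": (tumor_cells >= min_tumor_cells
                     and tumor_fraction >= min_tumor_fraction),
    }

# Example: a cellular smear with 3000 tumor cells out of 10,000 total nucleated cells
print(smear_adequate_for_ngs(3000, 10000))
# {'tumor_fraction': 0.3, 'enough_tumor_cells': True, 'enough_tumor_fraction': True, 'adequate': True}
```

In practice the counts themselves come from the cytologist's slide review (or ROSE), and the thresholds depend on the downstream assay's input requirements.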
One of the limitations of using direct smears for molecular testing is the potential medicolegal issue of sacrificing diagnostic material as the smears are not reproducible. In addition, most molecular assays are validated using formalin‐fixed tissues, so a new sample type, like direct smears, may require a separate validation to satisfy CLIA/CAP validation guidelines. CAP also recommends preserving the diagnostic material on smears; however, in cases where the diagnostic smears must be harvested for indicated ancillary testing, photographs of diagnostic material or digitization of the slides is acceptable replacements for the original diagnostic material. , Liquid‐based cytology is another common cytology specimen. The samples are fixed with CytoLyt (Hologic) or CytoRich Red (Fisher Scientific, UK), and the slides are usually stained with Papanicolaou staining. LBC slides have been shown to have minimal differences in adequacy and in mutation detection rate compared to direct smears. , The molecular analysis can be performed by either scraping off the cells from the monolayer slide or using the suspended sample in the fixative solution. , Microdissection is possible but is more challenging on the slides compared to the direct smears due to the constricted area of cell deposition on the monolayer slide. , Cell blocks have been used more frequently than other non‐formalin fixed cytology specimens for molecular testing. These cell blocks can generate multiple sections, allowing for diagnostics slides to be retained while providing material for molecular testing. Additionally, the similarity of these cell blocks to traditional histology blocks means that a molecular assay previously validated for using FFPE samples will also be validated for using cell blocks. , , , , , , , , , , , , , , , , , However, cell blocks have limitations similar to traditional FFPE samples. Insufficient cellularity of cell blocks is the most common problem encountered in molecular testing. In addition, the quality of nucleic acids extracted from the cell block material is not as high as from non‐formalin fixed cytology specimens. If tumor cellularity meets the minimal level of detection of the assay and nucleic acid amount meets minimal test specifications, paraffin scrolls can be cut from the block and placed directly into a microcentrifuge tube for nucleic acid extraction. Otherwise, unstained sections can be cut and nucleic acids can be extracted by cell lifting or scraping from the unstained slides. Supernatant fluids obtained after cell pelleting and centrifugation during cytology specimen preparation can also be used for molecular testing. These samples are commonly discarded at the end of preparation. However, there are nucleic acid residues in these supernatant solutions that can be extracted for potential future use in molecular testing. , , A number of studies have exploited this possibility and found that DNA extracted from supernatant of FNA of various organs can be reliably utilized in NGS analysis and the results are comparable with the FNA‐tissue derive DNA. , , , , , , 2.2 Sample quality Several factors should be considered when submitting material for molecular testing, including, total cellularity, level of necrosis present, and the tumor content of the specimen. , , , , One of the greatest barriers to obtaining molecular results using cytology specimens is the low cellularity often obtained during sample preparations. 
While the cellularity required can vary greatly and depends on the molecular assay's nucleic acid requirements, it is important to submit specimens with the highest cellular content for molecular testing. Additional consideration should be given to selecting samples with limited (<20%) necrosis. Higher levels of necrosis can increase the amount of poor‐quality nucleic acids present in an extract and further inhibit optimal PCR amplification. The percentage of neoplastic cells in a sample required for accurate molecular testing depends on the molecular method's sensitivity. While there remains some variability in this estimate, the general rule of thumb is to provide a sample with a tumor content twice that of the limit of detection of the assay. However, for assays with higher sensitivity, like NGS, many laboratories will increase that cut‐off to account for possible tumor heterogeneity. Submitting a specimen with the highest tumor content available will decrease the concern of a false negative result. 2.3 Nucleic acid extraction The first step in any molecular assay is proper isolation, purification, and extraction of nucleic acids (DNA and/or RNA) from a prepared specimen. A spectrum of samples can be used, including frozen, fresh, or fixed tissue, aspirate smears, and blood. Aspirate smears are the most commonly submitted cytology specimens and should be provided on fixed non‐cover‐slipped slides or from deparaffinizing 5–20‐micron glass slide sections obtained from the corresponding FFPE cell blocks. Specific care needs to be taken when fixed or decalcified samples are sent for molecular testing because many of these reagents can result in DNA damage. Fixatives and decalcifying agents, such as 10% buffered formalin for fixation and EDTA for decalcification should be used to avoid nucleic acid degradation. In the modern molecular laboratory, numerous extraction methodologies have been successfully employed to obtain high quality nucleic acids. The methodologies range from historical chemical extractions; to phenol‐chloroform and proteinase K‐based methods , to more modern physical extraction methods using magnetic beads or column‐based purifications. , While these methodologies differ in principle, they all serve to remove contaminants such as proteins and lipids that can inhibit downstream amplification techniques central to most molecular assays. Method selection will largely depend on technology availability, the volume of testing, and the types of nucleic acids utilized for testing. Methods that purify only DNA are typically used to identify single nucleotide variants, insertions and deletions, and some copy number variants. While methods to extract RNA are typically employed to detect gene fusions or changes in gene expression. For more comprehensive methods, including many NGS panels, both RNA and DNA are required. Therefore, strategies to extract total nucleic acids will be selected to minimize the amount of tissue needed for analysis. Upon extraction, the nucleic acid yield and quality are assessed through absorbance methods (e.g., NanoDrop™, ThermoFisher) or fluorescence methods (e.g., Quibit, ThermoFisher). Using a UV–vis spectrophotometer, absorbance at a wavelength at which nucleic acids absorb light most strongly (260 nm or A260) is taken, and nucleic acid quantity is calculated using the Beer–Lambert law, which predicts a linear change in absorbance with concentration. 
It is important to note that both DNA and RNA absorb light at A260; therefore, other nucleic acid contaminants may lead to an overestimate of the actual yield. Sample purity can also be evaluated by comparing the absorbance of nucleic acids (A260) with absorption by aromatic rings in amino acids (A280) and absorption by organic compounds and chaotropic salts (A230). The higher these ratios (A260/A280 and A260/A230), the greater the purity, with the A260/A280 ratio of highly pure DNA ranging between 1.7 and 2.0 and that of RNA between 1.8 and 2.3. Modern fluorescence methods rely on fluorescent dyes that selectively bind to the specific nucleic acid being measured (i.e., double-stranded DNA, RNA, etc.). This selectivity allows for greater specificity and sensitivity than traditional absorbance methods, especially at lower nucleic acid concentrations. These dyes emit light at a characteristic emission wavelength that a fluorometer can measure. Nucleic acid yield can then be calculated by comparing the amount of fluorescence in the sample with that of a known standard curve. The accuracy of the final concentration is therefore dependent on the standard curve, making appropriate selection of the reference material imperative.
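The absorbance arithmetic behind these quality checks is straightforward. The sketch below is a minimal illustration, not laboratory software: it assumes the conventional conversion factors (an A260 of 1.0 corresponding to roughly 50 ng/µL of double-stranded DNA or 40 ng/µL of RNA), which are standard spectrophotometry constants rather than values stated in this review, and it applies the purity windows quoted above.

```python
def quantify_nucleic_acid(a260: float, a280: float, a230: float,
                          dilution_factor: float = 1.0,
                          analyte: str = "dsDNA") -> dict:
    """Estimate concentration and purity from UV absorbance readings.

    Uses the conventional conversion factors (A260 of 1.0 ~ 50 ng/uL
    dsDNA or ~40 ng/uL RNA); purity windows follow the ratios quoted
    in the text. Illustrative only, not a validated calculation.
    """
    factors = {"dsDNA": 50.0, "RNA": 40.0}  # ng/uL per A260 unit
    conc = a260 * factors[analyte] * dilution_factor
    r260_280 = a260 / a280 if a280 else float("nan")
    r260_230 = a260 / a230 if a230 else float("nan")
    if analyte == "dsDNA":
        purity_ok = 1.7 <= r260_280 <= 2.0
    else:
        purity_ok = 1.8 <= r260_280 <= 2.3
    return {"conc_ng_per_uL": round(conc, 1),
            "A260_A280": round(r260_280, 2),
            "A260_A230": round(r260_230, 2),
            "purity_ok": purity_ok}

# Example: A260 = 0.45, A280 = 0.25, A230 = 0.21 on an undiluted DNA extract
print(quantify_nucleic_acid(0.45, 0.25, 0.21))
# {'conc_ng_per_uL': 22.5, 'A260_A280': 1.8, 'A260_A230': 2.14, 'purity_ok': True}
```

Fluorometric quantification replaces the fixed conversion factor with interpolation against a dye-specific standard curve, which is why the choice of reference material matters.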
ANALYTICAL METHODS COMMONLY USED IN CYTOPATHOLOGY 3.1 Polymerase chain reaction Polymerase chain reaction (PCR) is the process by which a target of DNA, or of cDNA created from RNA, is amplified to create millions of copies of a specific genomic region. This process allows for the detection of a wide variety of genomic variants, including single nucleotide variants (SNVs), also referred to as somatic point mutations, deletions and insertions, copy number variants, and gene fusions, depending on assay design. This technique can be leveraged to detect a single known target or multiple targets in a single reaction (multiplex PCR), either qualitatively by end-point PCR or quantitatively by quantitative PCR (qPCR or real-time PCR). All PCR reactions require a double-stranded DNA or cDNA template, short target gene-specific oligonucleotide primers, DNA polymerase, the four deoxyribonucleotide bases (dNTPs), buffer, KCl, and MgCl2. The template contains the desired region of nucleotides to be assayed and must be double-stranded. For single-stranded RNA to be amplified by PCR, it must first be converted to a double-stranded cDNA molecule. This process, known as reverse transcriptase PCR (RT-PCR), requires a reverse transcriptase (RT) enzyme, a targeting oligonucleotide, and the same buffers and reagents used in a PCR reaction. The targeting oligonucleotide may either be specific to the RNA target of interest or, more commonly, universally bind all RNA to create a complete cDNA library. These universal oligonucleotides are typically a series of random hexamers or target the poly-A tract of mRNA. Once the oligonucleotide hybridizes with the RNA, the RT polymerase will read and add nucleotide bases (dNTPs) to the template RNA strand, resulting in one double-stranded cDNA molecule ready for use in downstream PCR reactions. Along with the appropriate double-stranded template, each PCR reaction must contain oligonucleotide primers that will flank the region or regions of interest.
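The amplification arithmetic behind PCR can be made concrete with a short calculation. The sketch below is illustrative only: the efficiency parameter and the example numbers are assumptions, not values from this review. With 100% efficiency it reproduces the ideal per-cycle doubling (2^N copies after N cycles) described in the next paragraph; real reactions amplify by a factor of (1 + efficiency) per cycle, with efficiency somewhat below 1.

```python
def pcr_copies(starting_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """Expected amplicon count after a number of PCR cycles.

    efficiency = 1.0 models perfect doubling each cycle (2**N overall);
    lower values model sub-ideal amplification. Illustrative only.
    """
    if not 0.0 <= efficiency <= 1.0:
        raise ValueError("efficiency must be between 0 and 1")
    return starting_copies * (1.0 + efficiency) ** cycles

# 100 template molecules taken through 30 cycles
print(f"{pcr_copies(100, 30):.3e}")        # ~1.074e+11 copies with perfect doubling
print(f"{pcr_copies(100, 30, 0.9):.3e}")   # ~2.3e+10 copies at 90% per-cycle efficiency
```

The steep dependence on cycle number and efficiency is also why small differences in starting template translate into measurable shifts in the quantitative PCR readout discussed below.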
Additional design modifications to primers, including the addition of fluorophores, may be considered depending on the downstream application. A thermostable DNA polymerase facilitates the addition of nucleotide bases to extend the primers, making a copy of the template. The added buffers maintain the pH of the reaction, KCl aids in proper primer hybridization, and MgCl2 enhances polymerase activity. The PCR reaction occurs inside an automated thermal cycler and consists of repeated cycles of temperature-dependent denaturation, annealing, and DNA synthesis. In the denaturation step, the DNA double helix is broken by denaturing the bonds at high temperatures (>94°C), producing two single-stranded DNA templates. The temperature is then lowered to allow for the annealing of oligonucleotide primers and probes to the DNA strand. Once annealed, the DNA polymerase can start catalyzing the elongation of the new DNA strand. This temperature cycling is repeated a set number of times, dependent on the amount of amplification required for the downstream application, yielding up to 2^N copies of the targeted sequence, or amplicons, where N is the number of cycles performed (Figure ). The process by which these PCR products are detected and analyzed is dependent on the intended downstream application. This review will further discuss some commonly used applications, including qPCR, fragment analysis (Sanger sequencing), and next-generation sequencing (NGS). 3.2 Quantitative PCR Some clinical applications, including identifying differential methylation, genotyping known single nucleotide variants, or quantifying gene fusion transcripts, require accurately determining the amount of a target sequence present in the source sample. To accomplish this, qPCR chemistries rely on the use of fluorescence-labeled oligonucleotide probes added to the PCR reaction described previously. These probes are designed to hybridize within the amplified target region, providing increased specificity to the PCR reaction. When the fluorophore absorbs light energy at a particular wavelength, it simultaneously emits light at a longer, lower-energy wavelength that a qPCR instrument can measure. Rather than detecting the amount of fluorescence at the end of the PCR reaction, qPCR reads the fluorescence at the end of every PCR cycle, allowing for visualization of the exponential amplification of the target. To get around the problem of non-specific fluorescence of unbound probes during cycling, these probes are designed to take advantage of the principles of fluorescence resonance energy transfer, or FRET. The principal mechanism of FRET is the transfer of energy from one fluorophore to another when placed in close proximity. Probe designs vary based on the chemistry of the qPCR assay and the intended downstream application. However, one common qPCR method employed in the clinical laboratory utilizes the inherent 5′ → 3′ exonuclease activity of Taq DNA polymerase. The qPCR probe is end-labeled with two fluorophores, a reporter and a quencher.
At the end of each PCR cycle a reading is obtained: all free reporters emit at the detected wavelength, while residual intact probes remain silenced. The fluorescent signal will increase exponentially until one of the reagents is exhausted, at which point each cycle will no longer result in a doubling of the target (Figure ). To calculate the number of starting copies present, the original sample must be compared to a known standard. Amplification curves of samples and standards are compared using the number of cycles it takes to reach a predetermined fluorescence threshold (Ct or Cq) value. This threshold must be within the exponential growth phase of all curves and will be dependent on the primers and probes used for the assay. A standard curve can be produced by determining the Ct values of the known standards; the Ct value of the sample is then compared with this curve to determine the relative quantity of the target in the starting material. Variations of this type of analysis make it possible to monitor tumor burden and minimal residual disease of a fusion using RNA transcripts. These types of analysis have also been adopted into semi- and fully-automated platforms such as the Biocartis Idylla™, which can go from FFPE sample to result in approximately 2 h. This is covered in detail in other reviews. 3.3 Fragment analysis by capillary electrophoresis The qualitative detection of PCR products is achieved through electrophoresis of amplified nucleic acid targets. In brief, in nucleic acid electrophoresis, an electrical current is applied to the negatively charged PCR product, which results in the migration of the product through a viscous medium, allowing for size separation of the PCR products. These principles have been covered elsewhere in detail. In the clinical laboratory, agarose or polyacrylamide gel electrophoresis has largely been replaced by capillary electrophoresis. These instruments often contain multiple capillaries (8–96) filled with a viscous matrix that allows for accurate size separation and can be more easily scaled to the application and volumes required by the laboratory. Somatic applications include detection of pathogenic deletions and insertions, detection of pathogenic variation through restriction enzyme digestion, allele-specific PCR, and Sanger sequencing. 3.4 Principles of Sanger sequencing Sanger sequencing can analyze genomic regions of approximately 300–1000 base pairs in size. Following PCR amplification, the purified PCR products are subjected to a sequencing reaction that includes enzymes, buffers, a sequencing primer, and a mixture of dNTPs and fluorescently labeled dideoxynucleotides (ddNTPs). As the enzyme works to create a copy of the PCR template, either a dNTP or a ddNTP is incorporated into the extending strand. When a ddNTP is added, it prohibits the subsequent addition of dNTPs, terminating the extending strand in a process termed dideoxy chain termination. This process results in the creation of a library of DNA fragments with a fragment terminating at every base of the sequence. This fragment library is then analyzed by capillary electrophoresis, and the fluorophore-tagged ddNTP in each fragment is identified. The final genomic sequence is determined by converting each identified fluorophore to the base of the ddNTP that carries it. This process is simplified by using various software tools designed to assemble and compare these sequences to a reference sequence.
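The comparison step performed by that software, aligning the determined sequence against a reference and flagging positions that differ, can be illustrated with a deliberately simplified sketch. It assumes the two sequences are already aligned, of equal length, and contain only single-base substitutions (real tools must also handle alignment, insertions and deletions, and mixed base calls at heterozygous or subclonal positions); the sequences, coordinates, and function name are invented for illustration.

```python
def call_substitutions(reference: str, sample: str, offset: int = 1) -> list:
    """Report positions where an aligned sample sequence differs from the reference.

    Simplified: assumes equal-length, pre-aligned sequences and single-base
    substitutions only; 'offset' is the coordinate of the first base.
    """
    if len(reference) != len(sample):
        raise ValueError("sequences must be pre-aligned to equal length")
    variants = []
    for i, (ref_base, alt_base) in enumerate(zip(reference.upper(), sample.upper())):
        if ref_base != alt_base:
            variants.append({"position": offset + i, "ref": ref_base, "alt": alt_base})
    return variants

# Hypothetical 12-base window; coordinates are arbitrary
print(call_substitutions("ACGGTCCATGCA", "ACGGTCCTTGCA", offset=101))
# [{'position': 108, 'ref': 'A', 'alt': 'T'}]
```

In a real Sanger trace the same position would also be inspected for a second, overlapping peak, which is how heterozygous variation is recognized.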
Variation is identified when either multiple peaks (or bases) are present at the same location or when the sequence determined is different from the reference sequencing (Figure ). Sanger sequencing has long been considered the gold standard for accurately detecting single nucleotide variants and deletions and insertions, as long as contained within the amplified PCR product. However, the applications for sanger sequencing of cytology specimens are limited by assay sensitivity with a tumor burden requirement of ~50% to avoid false negatives due to high tumor heterogeneity. However, many of the limitations of somatic sanger sequencing have been reduced through the wide adoption of advancing next‐generation sequencing methodologies. 3.5 Principles of next generation sequencing Advancements in sequencing technology have allowed for the high throughput sequencing of numerous molecules of DNA/RNA simultaneously, referred to as next‐generation sequencing (NGS) or massive parallel sequencing (MPS). This has opened the door to faster and more cost‐effective methods for sequencing thousands of amplicons at a single time. With the advent of NGS, it is now possible to sequence genes targeted for diagnosis, treatment, and prognosis of specific cancers (cancer hotspot panels), the entire exome (whole exome sequencing, WES), or even the genome (whole genome sequencing, WGS). Most commercially available NGS platforms are capable of short‐read sequencing (~150–200 base pairs of sequence) through sequence by ligation (SBL) or sequence by synthesis (SBS) chemistries, which are reviewed in additional detail elsewhere. In short‐read sequencing, sequencing libraries are typically prepared by fragmenting and end‐treating genomic DNA or cDNA libraries to prevent overhangs and unwanted strand extension. Additional oligonucleotides, known as adaptors, are hybridized or ligated to DNA strands to allow for the unique identification of each molecule and its source. These adaptors allow for simultaneously sequencing specimens from many patients on a single chip or flow cell. These adaptors also add proprietary sequences required for hybridization to the sequencing substrate. Targets in hotspot sequencing panels common in somatic sequencing are enriched either through PCR‐based amplification or probe hybridization and purification methods. Enrichment of these targets increases sequencing coverage by reducing off‐target sequencing and maximizing usable sequencing output. Next, the enriched libraries are clonally amplified to produce a template library suitable for sequencing. This clonal amplification occurs on either a solid surface (e.g., Illumina) or in an emulsion (e.g., IonTorrent, RainDance) and the increased signal produced during the sequencing of each molecule increases the overall accuracy of the sequencing reaction (Figure ). Upon completion of library preparation, target enrichment, and clonal amplification, the fragment library is ready for sequencing. Similar to Sanger sequencing, SBS chemistries determine base composition by detecting the signal that is emitted from newly incorporated nucleotides during the sequencing reaction. The signal capture technique differs depending on the sequencing platform. In SBS chemistries, the emitted signal consists of either a fluorophore (Illumina), a change in ionic concentration (Ion Torrent), or the detection of light (Pyrosequencing) (Figure ). 
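The relationship between sequencing output, target enrichment, and depth of coverage described above can be put into numbers with a short back-of-the-envelope calculation. The sketch below uses the standard total-bases-over-target-size approximation; the on-target fraction parameter and the example figures are assumptions for illustration rather than values taken from this review.

```python
def mean_target_coverage(reads: int, read_length_bp: int,
                         target_size_bp: int, on_target_fraction: float = 1.0) -> float:
    """Approximate mean depth of coverage over an enriched target region.

    coverage ~= (reads * read length * fraction of reads on target) / target size.
    A rough planning estimate only; it ignores duplicates, soft-clipping,
    and uneven capture or amplification efficiency.
    """
    if target_size_bp <= 0:
        raise ValueError("target_size_bp must be positive")
    return reads * read_length_bp * on_target_fraction / target_size_bp

# 2 million 150-bp reads over a 50-kb hotspot panel, assuming 80% of reads on target
print(f"{mean_target_coverage(2_000_000, 150, 50_000, 0.8):.0f}x")  # 4800x
```

Estimates of this kind are one reason hotspot panels, rather than whole-exome or whole-genome approaches, remain attractive for low-input cytology specimens: concentrating the same output on a small target keeps per-base depth high.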
While these short‐read NGS technologies are currently the most commonly performed sequencing assays in the clinical molecular laboratory, there are several limitations to this type of sequencing strategy. Short‐read sequencing generates reads that may not overlap one another, potentially decreasing target coverage. These sequencing chemistries have difficulty distinguishing complex long tandem repeats, as in centromeric regions and satellite arrays. This is particularly evident using Ion Torrent and pyrosequencing technologies when sequencing homopolymer regions. In addition, short‐read sequencing cannot determine on which allele the variant lies, or genetic phase. Another limitation is amplification bias, which occurs because GC or A‐ rich regions are amplified less efficiently. This limitation can introduce inaccuracies during the library preparation step after several cycles of amplification. Some of these limitations have been addressed by increasing the sequencing read length. These real‐time long‐read sequencing chemistries include the PacBio Single‐molecule real‐time (SMRT) sequencing and the Nanopore sequencing approach. In PacBio SMRT sequencing, two hairpin adaptors are ligated at the ends of the DNA template (called SMRTbell template) to allow for continuous circular sequencing. The SMRTbell template is then directed to the special zero‐mode waveguide (ZMW) wells, where sequencing is initiated. In the ZMW wells, fluorophore‐labeled nucleotides are added to the elongating DNA strand. The camera at the bottom of the well records the light emitted whenever a base is incorporated. Nanopore sequencer uses the native single‐stranded DNA fragment to detect the DNA composition directly. The DNA template for the nanopore consists of a double‐stranded lead adaptor that contains a specific sequence that helps guide the template into the charged protein pore. When the DNA strand passes through the pore, the voltage changes inside the pore and is recorded as DNA sequence (k‐mer). (Figure ). 3.6 The NGS bioinformatics pipeline Laboratories rely on several computational steps to make sense of data generated by NGS platforms. This bioinformatics process generally consists of three tiers; primary, secondary, and tertiary analysis (Figure ). Following the wet lab process described earlier, samples are sequenced, and data regarding the base incorporated in the extending DNA fragment is generated. This primary analysis includes specifics about each base called per cycle and the quality of each call. The secondary analysis uses the raw sequence that was identified through primary analysis and aligns the sequences against a reference genome. Variant calling tools detect any variation from the reference in the samples sequenced. Several publically available algorithms exist, and many additional proprietary variant callers have been developed and incorporated into assay‐specific bioinformatics pipelines. Finally, the tertiary analysis consists of visualizing, filtering, and annotating the variation identified. Variant calling tools detect thousands of gene variants. However, many of those are clinically irrelevant because they are associated with common variations within a population, lie in regions of the genome for which clinical and function information is unavailable, or may represent pipeline‐specific sequencing errors or artifacts. Several parameters are used to filter these variants allowing for a more focused approach to manual variant review. 
Some examples of filtering parameters include population polymorphisms, functional domain filters, read depths, and various sequencing quality metrics, including strand bias and call quality. The results of these well-validated filtering steps are the removal of the great majority of irrelevant variation and the inclusion of a more manageable number of potentially pathogenic variants for manual review. Variants are then classified according to the evidence-based strength of their prognostic, diagnostic, and therapeutic profile. In 2017, AMP/ASCO/CAP published guidelines to assist laboratories in classifying somatic variants to help standardize reporting of clinically relevant variants. In brief, variants with the strongest clinical significance in cancer (e.g., within national guidelines) for diagnosis, prognosis, or an FDA-approved treatment are classified as tier 1; those with potential clinical significance in cancer (e.g., clinical trials, clinical or functional studies) are classified as tier 2; variants with unknown clinical significance (VUS) are classified as tier 3; and benign or likely benign variants are classified as tier 4 (Figure ). 3.7 Additional considerations for NGS Historically, most standard NGS assays have a limit of detection of ~5% allele frequency. This limit of detection is determined by the number of reads (depth of coverage) for each base and the amount of sequencing error or background in the sequencing platform. To increase accuracy and error correction, a barcoding system has been implemented. These include unique molecular identifiers and unique dual index barcodes. They consist of short random DNA sequences that attach to the DNA library sequences, uniquely tagging each for later identification. Duplicates and false-positive reads are parsed from rare variants as they go through the bioinformatics pipeline, referred to as error correction sequencing. This technology also allows for the digital quantification and tracking of clones and subclones for minimal residual disease measurement.
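To make the filtering and tiering logic of tertiary analysis more tangible, the sketch below applies a handful of the filters named above (population allele frequency, read depth, variant allele fraction) and then assigns a simplified tier label. The thresholds, field names, and tier rules are invented for illustration and are far cruder than the AMP/ASCO/CAP framework or any validated clinical pipeline, which rely on curated knowledge bases and manual review.

```python
def triage_variant(variant: dict,
                   max_population_af: float = 0.01,
                   min_depth: int = 250,
                   min_vaf: float = 0.05) -> dict:
    """Toy tertiary-analysis step: filter a called variant, then tier it.

    Thresholds and tier rules are illustrative stand-ins for a validated,
    guideline-based classification, not an implementation of it.
    """
    filtered_out = (
        variant["population_af"] > max_population_af   # likely a common polymorphism
        or variant["depth"] < min_depth                 # insufficient read support
        or variant["vaf"] < min_vaf                     # below the assumed limit of detection
    )
    if filtered_out:
        return {**variant, "status": "filtered"}
    if variant.get("fda_approved_therapy"):
        tier = "Tier I (strong clinical significance)"
    elif variant.get("clinical_trial_evidence"):
        tier = "Tier II (potential clinical significance)"
    else:
        tier = "Tier III (unknown significance)"
    return {**variant, "status": "reportable", "tier": tier}

example = {"gene": "EGFR", "hgvs_p": "p.L858R", "population_af": 0.0,
           "depth": 1200, "vaf": 0.22, "fda_approved_therapy": True}
print(triage_variant(example)["tier"])   # Tier I (strong clinical significance)
```

Even in this toy form, the ordering matters: quality- and frequency-based filtering comes first, so that only well-supported, plausibly somatic calls reach the clinical classification step.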
These universal oligonucleotides are typically a series of random hexamers or target the poly‐A tract of mRNA. Once the oligonucleotide hybridizes with the RNA, the RT polymerase will read and add nucleotide bases (dNTPs) to the template RNA strand, resulting in one double‐stranded cDNA molecule ready for use in downstream PCR reactions. Along with the appropriate double‐stranded template, each PCR reaction must contain oligonucleotide primers that flank the region or regions of interest. Additional design modifications to primers, including the addition of fluorophores, may be considered depending on the downstream application. A thermostable DNA polymerase facilitates the addition of nucleotide bases to extend the primers, making a copy of the template. The added buffers maintain the pH of the reaction, KCl aids in proper primer hybridization, and the MgCl2 enhances polymerase activity. The PCR reaction occurs inside an automated thermal cycler and consists of repeated cycles of temperature‐dependent denaturation, annealing, and DNA synthesis. In the denaturation step, the DNA double helix is broken by denaturing the bonds at high temperatures (>94°C), producing two single‐stranded DNA templates. The temperature is then lowered to allow for the annealing of oligonucleotide primers and probes to the DNA strand. Once annealed, the DNA polymerase can start catalyzing the elongation of the new DNA strand. This temperature cycling is repeated a set number of times, dependent on the amount of amplification required for the downstream application, with the result of 2^N copies of the targeted sequence, or amplicons, with N being the number of cycles performed (Figure ). The process by which these PCR products are detected and analyzed is dependent on the intended downstream application. This review will further discuss some commonly used applications including qPCR, fragment analysis (Sanger sequencing), and next‐generation sequencing (NGS). Quantitative PCR Some clinical applications, including identifying differential methylation, genotyping known single nucleotide variants, or quantifying gene fusion transcripts, require accurately determining the amount of a target sequence present in the source sample. To accomplish this, qPCR chemistries rely on the use of fluorescence‐labeled oligonucleotide probes added to the PCR reaction described previously. These probes are designed to hybridize within the amplified target region, providing increased specificity to the PCR reaction. When the fluorophore absorbs light energy at a particular wavelength, it simultaneously emits energy at a longer wavelength, which the qPCR instrument can measure. Rather than detecting the amount of fluorescence at the end of the PCR reaction, qPCR reads the fluorescence at the end of every PCR cycle, allowing for visualization of the exponential amplification of the target. To get around the problem of non‐specific fluorescence of unbound probes during cycling, these probes are designed to take advantage of the principles of fluorescence resonance energy transfer, or FRET. The principal mechanism of FRET is the transfer of energy from one fluorophore to another when the two are placed in close proximity. Probe designs vary based on the chemistry of the qPCR assay and the intended downstream application. However, one common qPCR method employed in the clinical laboratory utilizes the inherent 5′ → 3′ exonuclease activity of Taq DNA polymerase. The qPCR probe is end‐labeled with two fluorophores, a reporter and a quencher.
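As a rough numerical illustration of the exponential amplification described above (a sketch with made‑up numbers, not a protocol from this review), the snippet below tabulates the theoretical 2^N amplicon count per cycle and contrasts it with a sub‑ideal amplification efficiency, one reason observed yields fall short of perfect doubling; the fluorescence read at the end of each qPCR cycle tracks this growth.

```python
# Illustrative only: theoretical PCR amplicon counts per cycle.
# Assumes one starting template copy and a user-chosen amplification
# efficiency (1.0 = perfect doubling each cycle; real reactions are lower).

def amplicon_copies(start_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies after `cycles` rounds of PCR: start * (1 + efficiency) ** cycles."""
    return start_copies * (1.0 + efficiency) ** cycles

if __name__ == "__main__":
    start = 1  # one double-stranded template molecule (hypothetical)
    for n in (10, 20, 30, 35):
        ideal = amplicon_copies(start, n, efficiency=1.0)    # the 2^N case
        typical = amplicon_copies(start, n, efficiency=0.9)  # sub-ideal efficiency
        print(f"cycle {n:>2}: ideal 2^N = {ideal:,.0f}  |  90% efficiency = {typical:,.0f}")
```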
When the probe is intact, the reporter and quencher fluorophores are separated by approximately 20–25 base pairs, resulting in energy transfer. The emission wavelength of the reporter excites the quencher, effectively silencing the reporter by removing any residual energy and preventing it from being detected. However, during the PCR reaction, as Taq polymerase acts to copy the template strand, it will displace and degrade the qPCR probe, separating the reporter and the quencher. At the end of each PCR cycle, a reading is obtained: all free reporters emit at the detected wavelength, while residual intact probes remain silenced. The fluorescent signal will increase exponentially until one of the reagents is exhausted, at which point each cycle will no longer result in a doubling of the target (Figure ). To calculate the number of starting copies present, the original sample must be compared to a known standard. Amplification curves of samples and standards are compared using the number of cycles it takes to reach a predetermined fluorescence threshold (Ct or Cq) value. This threshold must be within the exponential growth phase of all curves and will be dependent on the primers and probes used for the assay. A standard curve is produced by determining the Ct values of the known standards; the relative quantity of the target in the starting material can then be estimated by comparing the sample Ct against this curve. Variations of this type of analysis make it possible to monitor tumor burden and minimal residual disease by quantifying fusion RNA transcripts. These types of analysis have also been adopted into semi‐ and fully‐automated platforms like Biocartis Idylla™ that can go from FFPE sample to result in approximately 2 h. This is covered in detail in other reviews. Fragment analysis by capillary electrophoresis The qualitative detection of PCR products is achieved through electrophoresis of amplified nucleic acid targets. In brief, in nucleic acid electrophoresis, an electrical current is applied to the negatively charged PCR product, which results in the migration of the product through a viscous medium, allowing for size separation of the PCR products. These principles have been covered elsewhere in detail. In the clinical laboratory, agarose or polyacrylamide gel electrophoresis has largely been replaced by capillary electrophoresis. These instruments often contain multiple capillaries (8–96) filled with a viscous matrix that allows for accurate size separation and can be more easily scaled to the application and volumes required by the laboratory. Somatic applications include the detection of pathogenic deletions and insertions, detection of pathogenic variation through restriction enzyme digestion, allele‐specific PCR, and Sanger sequencing. Principles of Sanger sequencing Sanger sequencing can analyze genomic regions of approximately 300–1000 base pairs in size. Following PCR amplification, the purified PCR products are subjected to a sequencing reaction that includes enzymes, buffers, a sequencing primer, and a mixture of dNTPs and fluorescently labeled dideoxynucleotides (ddNTPs). As the enzyme works to create a copy of the PCR template, either a dNTP or a ddNTP is incorporated into the extending strand. When a ddNTP is added, it prohibits the subsequent addition of dNTPs, resulting in the termination of the extending strand in a process termed dideoxy chain termination.
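Stepping back briefly to the qPCR relative‑quantification approach described earlier in this section: the sketch below shows one way a standard curve built from known Ct values might be used to estimate starting copy number for an unknown sample. The standard quantities, Ct values, and the simple log‑linear fit are illustrative assumptions, not data or methods from this review.

```python
# Illustrative only: estimate starting copies from a qPCR standard curve.
# Ct is approximately linear in log10(starting quantity); the values below
# are hypothetical standards, not measurements from this review.
import math

standards = [  # (known starting copies, measured Ct) -- made-up values
    (1e6, 15.1), (1e5, 18.4), (1e4, 21.8), (1e3, 25.2), (1e2, 28.5),
]

# Least-squares fit of Ct = slope * log10(copies) + intercept
xs = [math.log10(q) for q, _ in standards]
ys = [ct for _, ct in standards]
n = len(standards)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def copies_from_ct(ct: float) -> float:
    """Invert the fitted standard curve to estimate starting copy number."""
    return 10 ** ((ct - intercept) / slope)

sample_ct = 23.0  # hypothetical unknown sample
print(f"slope={slope:.2f}, intercept={intercept:.2f}, "
      f"estimated copies at Ct {sample_ct}: {copies_from_ct(sample_ct):,.0f}")
```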
This dideoxy chain‐termination process results in the creation of a library of DNA fragments with a fragment terminating at every base of the sequence. This fragment library is then analyzed by capillary electrophoresis and the fluorophore‐tagged ddNTP in each fragment is identified. The final genomic sequence is determined by converting the identified fluorophore to the known ddNTP carrying that fluorophore. This process is simplified by using various software tools designed to assemble and compare these sequences to a reference sequence. Variation is identified when either multiple peaks (or bases) are present at the same location or when the sequence determined is different from the reference sequence (Figure ). Sanger sequencing has long been considered the gold standard for accurately detecting single nucleotide variants and deletions and insertions, as long as they are contained within the amplified PCR product. However, the applications of Sanger sequencing to cytology specimens are limited by assay sensitivity, with a tumor burden requirement of ~50% to avoid false negatives due to high tumor heterogeneity. Many of the limitations of somatic Sanger sequencing have been reduced, however, through the wide adoption of advancing next‐generation sequencing methodologies. Principles of next generation sequencing Advancements in sequencing technology have allowed for the high‐throughput sequencing of numerous molecules of DNA/RNA simultaneously, referred to as next‐generation sequencing (NGS) or massively parallel sequencing (MPS). This has opened the door to faster and more cost‐effective methods for sequencing thousands of amplicons at a time. With the advent of NGS, it is now possible to sequence genes targeted for diagnosis, treatment, and prognosis of specific cancers (cancer hotspot panels), the entire exome (whole exome sequencing, WES), or even the genome (whole genome sequencing, WGS). Most commercially available NGS platforms are capable of short‐read sequencing (~150–200 base pairs of sequence) through sequence‐by‐ligation (SBL) or sequence‐by‐synthesis (SBS) chemistries, which are reviewed in additional detail elsewhere. In short‐read sequencing, sequencing libraries are typically prepared by fragmenting and end‐treating genomic DNA or cDNA libraries to prevent overhangs and unwanted strand extension. Additional oligonucleotides, known as adaptors, are hybridized or ligated to the DNA strands to allow for the unique identification of each molecule and its source. These adaptors allow for simultaneously sequencing specimens from many patients on a single chip or flow cell. These adaptors also add proprietary sequences required for hybridization to the sequencing substrate. Targets in hotspot sequencing panels common in somatic sequencing are enriched either through PCR‐based amplification or probe hybridization and purification methods. Enrichment of these targets increases sequencing coverage by reducing off‐target sequencing and maximizing usable sequencing output. Next, the enriched libraries are clonally amplified to produce a template library suitable for sequencing. This clonal amplification occurs on either a solid surface (e.g., Illumina) or in an emulsion (e.g., Ion Torrent, RainDance), and the increased signal produced during the sequencing of each molecule increases the overall accuracy of the sequencing reaction (Figure ). Upon completion of library preparation, target enrichment, and clonal amplification, the fragment library is ready for sequencing.
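To make the coverage and multiplexing considerations above concrete, the back‑of‑the‑envelope sketch below estimates mean depth of coverage for a multiplexed hotspot panel run. All of the run metrics (read count, read length, on‑target fraction, panel size, sample count) are hypothetical round numbers chosen for illustration, not specifications of any platform.

```python
# Illustrative only: back-of-the-envelope mean depth for a multiplexed panel run.
# Every number below is a hypothetical example, not a platform specification.

reads_per_run   = 20_000_000   # total reads generated on the flow cell / chip
read_length_bp  = 150          # short-read length in base pairs
on_target_frac  = 0.80         # fraction of bases mapping to the enriched targets
panel_size_bp   = 50_000       # cumulative size of the hotspot panel
samples_per_run = 24           # patient libraries multiplexed via index adaptors

on_target_bases_per_sample = (reads_per_run * read_length_bp * on_target_frac
                              / samples_per_run)
mean_depth = on_target_bases_per_sample / panel_size_bp

print(f"~{on_target_bases_per_sample:,.0f} on-target bases per sample")
print(f"approximate mean depth: {mean_depth:,.0f}x")
```

Doubling the number of multiplexed samples halves the expected depth, which is one reason enrichment efficiency and sample count are balanced against the depth required by the assay.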
Similar to Sanger sequencing, SBS chemistries determine base composition by detecting the signal that is emitted from newly incorporated nucleotides during the sequencing reaction. The signal capture technique differs depending on the sequencing platform. In SBS chemistries, the emitted signal consists of either a fluorophore (Illumina), a change in ionic concentration (Ion Torrent), or the detection of light (pyrosequencing) (Figure ). While these short‐read NGS technologies are currently the most commonly performed sequencing assays in the clinical molecular laboratory, there are several limitations to this type of sequencing strategy. Short‐read sequencing generates reads that may not overlap one another, potentially decreasing target coverage. These sequencing chemistries have difficulty distinguishing complex long tandem repeats, as in centromeric regions and satellite arrays. This is particularly evident using Ion Torrent and pyrosequencing technologies when sequencing homopolymer regions. In addition, short‐read sequencing cannot determine on which allele a variant lies, or genetic phase. Another limitation is amplification bias, which occurs because GC‐ or A‐rich regions are amplified less efficiently. This limitation can introduce inaccuracies during the library preparation step after several cycles of amplification. Some of these limitations have been addressed by increasing the sequencing read length. These real‐time long‐read sequencing chemistries include PacBio single‐molecule real‐time (SMRT) sequencing and the nanopore sequencing approach. In PacBio SMRT sequencing, two hairpin adaptors are ligated at the ends of the DNA template (called a SMRTbell template) to allow for continuous circular sequencing. The SMRTbell template is then directed to special zero‐mode waveguide (ZMW) wells, where sequencing is initiated. In the ZMW wells, fluorophore‐labeled nucleotides are added to the elongating DNA strand. The camera at the bottom of the well records the light emitted whenever a base is incorporated. The nanopore sequencer uses the native single‐stranded DNA fragment to detect the DNA composition directly. The DNA template for the nanopore consists of a double‐stranded lead adaptor that contains a specific sequence that helps guide the template into the charged protein pore. When the DNA strand passes through the pore, the voltage inside the pore changes and is recorded as a DNA sequence (k‐mer) (Figure ). The NGS bioinformatics pipeline Laboratories rely on several computational steps to make sense of data generated by NGS platforms. This bioinformatics process generally consists of three tiers: primary, secondary, and tertiary analysis (Figure ). Following the wet lab process described earlier, samples are sequenced, and data regarding the base incorporated in the extending DNA fragment are generated. This primary analysis includes specifics about each base called per cycle and the quality of each call. The secondary analysis uses the raw sequence that was identified through primary analysis and aligns the sequences against a reference genome. Variant calling tools detect any variation from the reference in the samples sequenced. Several publicly available algorithms exist, and many additional proprietary variant callers have been developed and incorporated into assay‐specific bioinformatics pipelines. Finally, the tertiary analysis consists of visualizing, filtering, and annotating the variation identified. Variant calling tools detect thousands of gene variants.
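As a toy illustration of the tertiary filtering step described above, the sketch below applies a few common filter types (population allele frequency, read depth, variant allele fraction, and a strand‑bias score) to a handful of mock variant calls. The variants, metrics, and thresholds are invented for illustration and are not clinical cutoffs or recommendations from this review.

```python
# Illustrative only: filtering mock variant calls before manual review.
# The variants and thresholds below are invented examples, not clinical cutoffs.

variant_calls = [
    {"variant": "KRAS p.G12D",  "pop_af": 0.00, "depth": 1500, "vaf": 0.12, "strand_bias": 0.02},
    {"variant": "EGFR p.L858R", "pop_af": 0.00, "depth": 900,  "vaf": 0.07, "strand_bias": 0.01},
    {"variant": "TP53 p.P72R",  "pop_af": 0.32, "depth": 1200, "vaf": 0.48, "strand_bias": 0.03},  # common polymorphism
    {"variant": "BRAF p.V600E", "pop_af": 0.00, "depth": 60,   "vaf": 0.30, "strand_bias": 0.45},  # low depth, biased call
]

def passes_filters(v, max_pop_af=0.01, min_depth=100, min_vaf=0.05, max_strand_bias=0.10):
    """Keep variants that are rare in the population and technically well supported."""
    return (v["pop_af"] <= max_pop_af
            and v["depth"] >= min_depth
            and v["vaf"] >= min_vaf
            and v["strand_bias"] <= max_strand_bias)

for v in variant_calls:
    status = "retain for review" if passes_filters(v) else "filtered out"
    print(f'{v["variant"]:<14} -> {status}')
```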
Many of these variants, however, are clinically irrelevant because they are associated with common variations within a population, lie in regions of the genome for which clinical and functional information is unavailable, or represent pipeline‐specific sequencing errors or artifacts. Several parameters are used to filter these variants, allowing for a more focused approach to manual variant review. Some examples of filtering parameters include population polymorphisms, functional domain filters, read depths, and various sequencing quality metrics, including strand bias and call quality. The result of these well‐validated filtering steps is the removal of the great majority of irrelevant variation and the retention of a more manageable number of potentially pathogenic variants for manual review. Variants are then classified according to the evidence‐based strength of their prognostic, diagnostic, and therapeutic profile. In 2017, AMP/ASCO/CAP published guidelines to assist laboratories in classifying somatic variants to help standardize reporting of clinically relevant variants. In brief, variants with the strongest clinical significance in cancer (e.g., within national guidelines) for diagnosis, prognosis, or an FDA‐approved treatment are classified as tier 1; those with potential clinical significance in cancer (e.g., clinical trials, clinical or functional studies) are classified as tier 2; variants with unknown clinical significance (VUS) are classified as tier 3; and benign or likely benign variants are classified as tier 4 (Figure ). Additional considerations for NGS Historically, most standard NGS assays have a limit of detection of ~5% allele frequency. This limit of detection is determined by the number of reads (depth of coverage) for each base and the amount of sequencing error or background in the sequencing platform. To increase accuracy and enable error correction, barcoding systems have been implemented. These include unique molecular identifiers and unique dual index barcodes. They consist of short random DNA sequences that attach to the DNA library sequences, uniquely tagging each for later identification. Duplicates and false‐positive reads are parsed from rare variants as they go through the bioinformatics pipeline, an approach referred to as error correction sequencing. This technology also allows for the digital quantification and tracking of clones and subclones for minimal residual disease measurement. CONCLUSIONS The application of molecular testing, especially NGS technology, to cytology samples plays a significant role in diagnosis, prognosis, and prediction of response to potential treatments for various diseases. Molecular cytopathology, therefore, is essential in the era of personalized medicine. Cytology samples typically contain better preserved nucleic acids compared with formalin‐fixed tissue samples. In addition, these samples can be easily obtained by minimally invasive procedures, and in many situations, they are the only available material for further testing. With proper validation, various cytology specimens can be utilized appropriately to help refine uncertain morphologic diagnoses and provide critical prognostic and predictive information about treatment plans. The modern cytopathologist needs to be familiar with the basics of molecular testing in cytology samples, including pre‐analytical considerations, various common molecular techniques, and the clinical utility of these tests.
With this knowledge, cytopathologists will be better informed and more engaged in patient care in the era of precision medicine. The authors declare no conflict of interest.
Prescriber Perspectives on Biosimilar Adoption and Potential Role of Clinical Pharmacology: A Workshop Summary
Adoption of biosimilars typically occurs quickly after market introduction. However, biosimilar market share varies widely across product classes, as a function of time on the market since launch and the number of marketed biosimilars within the product class. As of September 2021, the biosimilar market share ranged between 3% for insulin glargine (this only includes Semglee before it was approved as an interchangeable biosimilar product) and 89% for filgrastim (IQVIA data; Figure ). Of the 35 FDA approved biosimilars as of April 2022, 19 are prescribed in oncology; 11 are cancer therapeutics (bevacizumab, rituximab, and trastuzumab), and 8 are supportive care products (epoetin alfa, filgrastim, and pegfilgrastim). The US market share of biosimilars for each of these products as of September 2021 ranged between 38% and 89%. For the treatment of inflammatory diseases in rheumatology, biosimilars to adalimumab, etanercept, infliximab, and rituximab have been approved by the FDA. However, as of April 13, 2022, only a fraction of FDA‐approved biosimilars for inflammatory diseases are commercially available for use in clinical practice; none for adalimumab and etanercept (see Table ). Despite the early availability of infliximab and rituximab biosimilars in rheumatology (the first infliximab biosimilar, Inflectra, launched in 2016 in the United States), rheumatologists have been slow to prescribe biosimilars. Adoption is growing, though; as of September 2021, the market share of biosimilars in rheumatology (rituximab and infliximab combined) was nearing 32% (IQVIA SMART Data Analytics Platform). For the treatment of inflammatory bowel disease (IBD), an umbrella term for Crohn's disease and ulcerative colitis, two inflammatory conditions of the gastrointestinal tract, the FDA has approved biosimilars for two tumor necrosis factor (TNF) alpha inhibitors: infliximab and adalimumab. However, all FDA‐approved adalimumab biosimilars and one FDA‐approved infliximab biosimilar have yet to launch as of April 2022. The relatively modest share of biosimilars in the US infliximab market (32% all indications combined) reflects a slow uptake of infliximab biosimilars for IBD treatment. Insulin is a biologic product class for which a wealth of clinical experience exists, with the marketing of the first recombinant human insulin product, Humulin, in 1982. Exogenous insulins are available in different mixtures, concentrations, and routes of administration; collectively called insulin products. Worldwide, the market of insulin products has grown significantly, reflecting the increased prevalence of type 2 diabetes (T2D), estimated to affect 510.8 million individuals by 2030. Insulin products used to be treated as drugs for regulatory purposes (i.e., approved as new drug applications (NDAs)). Novel insulin products were approved via the 505(b) (1) pathway, and follow‐on products approved via the 505(b) (2) pathway. In March 2020, insulin products were officially transitioned to the biologic regulatory approval pathway and the insulin products approved under NDAs were all deemed biologic license applications (BLAs). This transition enables the submission of applications for products that are proposed as insulin biosimilar or interchangeable products. 
Because of the recent transition of insulin products to the BLA approval pathway, only two FDA‐approved products biosimilar to insulin glargine are available: Semglee (insulin glargine‐yfgn), which obtained interchangeability designation in 2021, and Rezvoglar (insulin glargine‐aglr), which was approved in 2021 and is not yet commercially available (see Table ). As shown in Figure , Semglee represented only 3% of the insulin glargine market in September 2021, which likely reflected sales of Semglee upon its original approval as a 505(b)(2) NDA in August 2020, because the interchangeable Semglee was only approved in July 2021. It should be noted that an additional insulin glargine product, Basaglar, although considered a follow‐on biologic rather than a biosimilar, was approved in 2015 under the 505(b)(2) regulatory pathway. In clinical practice, this formulation is used in much the same fashion as a biosimilar. By 2018, this insulin preparation constituted 44% of insulin glargine use among persons with diabetes who have Medicaid insurance. Biologics account for only 2% of all prescriptions in the United States, yet represent 43% of invoice‐level medicine spending, reaching $211 billion in 2019 and growing at a 14.6% compound annual growth rate (CAGR) since 2015 (i.e., more than twice as fast as the rate for the total market comprising small molecules, biologics, and biosimilars). Oncology is particularly affected by the high cost of biologics. In 2018, oncology products accounted for 6 of the 10 most costly biologics covered under Medicare Part B. Insulin prices in the United States are also high and have increased exponentially in the past 2 decades. In 2020, annual insulin sales worldwide amounted to $19 billion, with $7 billion in US sales alone (i.e., close to a third of the worldwide market). In fact, US insulin cost far exceeds the average insulin cost outside the United States. Long‐acting and rapid‐acting insulin products in the United States cost 8–20 times and 5–10 times more, respectively, than outside the United States. SSR Health data show that the list price of insulin glargine products has reached over 20 cents per unit (from 10 cents per unit in 2010); the average out‐of‐pocket price per unit after rebate is around five cents. Sales of biosimilars, priced 11–45% lower than their innovator counterparts, have the potential to achieve considerable savings for payers (see Figure ). Since the passage of the BPCI Act, $17 billion of biosimilar spending has been associated with $37 billion in savings, despite heterogeneous adoption. Between 2020 and 2024, biosimilar sales are expected to result in $109 billion in savings, as newly approved biosimilars launch and existing biosimilars see continued uptake and price reductions. This section is based on Dr. Lyman's, Dr. Gibofsky's, Dr. Lichtenstein's, and Dr. Bloomgarden's workshop presentations.
Reduced commercial availability of biosimilars: The patent dance Regulatory approval does not necessarily lead to commercial availability because manufacturers of the innovator biologic often use patent infringement litigation to delay the marketing of a recently approved biosimilar following a time‐consuming patent dispute resolution process outlined in the BPCI Act known as the “patent dance.” By the time a biosimilar manufacturer can legally obtain FDA approval (12 years after the date of first licensure of the innovator used as reference), the innovator manufacturer will have filed multiple secondary patents aimed to extend the innovator's exclusivity period. During the patent dance's first wave of litigation (before FDA approval of the biosimilar), the biosimilar and innovator manufacturers exchange confidential information to negotiate the scope of the patents that the innovator manufacturer will litigate. If some of the patents that the innovator manufacturer wishes to litigate have not expired, the biosimilar manufacturer will have to successfully demonstrate invalidity, unenforceability, or non‐infringement to be able to market its product before the relevant patents have expired. During the second wave of litigation, which begins with FDA approval of the biosimilar, the innovator manufacturer can assert any patent from the original list. Resolution can be obtained through settlement or litigation, resulting in either a delay or no delay in the launch of the biosimilar. Of the 35 biosimilars approved by FDA as of April 2022, there were 10 that had delayed launches primarily due to patent litigation between innovator and biosimilar manufacturers ( Table ). Rheumatology is particularly affected by delays in the launch of FDA‐approved biosimilars. Adalimumab biosimilars have been available in most European Union countries since October 2018 (the year the primary European patent for the RP expired). In the United States, although the primary patent for the innovator expired in 2016, the launch of adalimumab biosimilars is scheduled for 2023 due to a prolonged patent dance involving formulation and dosage patents. Nine adalimumab biosimilars, including 7 already approved and 2 in late‐stage development, are expected to launch in 2023 in the United States upon settlement between the innovator manufacturer and individual adalimumab biosimilar developers. The development and approval of oncology biosimilars should also benefit from the upcoming expiration of patents on biologics, as patents on 20 oncology biologics are set to expire by 2023. Prescribers' lack of familiarity with the biosimilar development paradigm and regulatory standards for biosimilar approval Numerous surveys have documented the concerns of healthcare providers over the efficacy and safety of biosimilars compared with innovator biologics, exposing the fact that physician and patient understanding of biosimilars remains insufficient. , These surveys link those concerns with a lack of familiarity with the specifics of the biosimilar regulatory approval process. For instance, unlike the development of innovator biologics which relies on clinical data to establish safety and efficacy of the innovator product, biosimilar development has limited reliance on clinical data for comparison between the biosimilar and innovator. 
In oncology, these concerns are likely impacting the uptake of the more recently approved cancer treatment biosimilars, which are not being adopted as rapidly and robustly as have been supportive care biosimilars. Another specificity of the biosimilar development program is the concept of extrapolation of indications, which involves conducting studies in a patient population considered most sensitive to detect clinically meaningful differences between the proposed biosimilar and the innovator, to support approval of the biosimilar for other approved conditions of use of the innovator; this waives the need to conduct CCS in each of the indications approved for the innovator. To support extrapolation, scientific justifications are expected for each additional indication. Examples of justifications include same mechanism of action in tested and non‐tested conditions of use, similar PK and biodistribution, and similar safety profiles, including immunogenicity. Extrapolation of indications is a concern for clinicians across medical specialties. Many oncologists are reluctant to prescribe a biosimilar in an indication that has not been clinically evaluated. Gastroenterologists managing patients with IBD also experience concerns over extrapolation of indications. The infliximab biosimilar CT‐P13 (infliximab‐dyyb) was approved for use in IBD based on the extrapolation of data obtained from studies conducted in rheumatoid arthritis (RA) and ankylosing spondylitis (AS), which has raised concerns among gastroenterologists. , Concerns about potential immunogenicity impact on clinical outcomes Immunogenicity is another concern over biosimilar use, although it can affect all biologics. Immunogenicity can be defined as an unwanted immune response to a biologic product that has the potential to affect the product's PK, PD, safety, and efficacy. Immunogenicity can lead to neutralization of administered product and loss of efficacy, cross reactivity with endogenous counterpart for certain products, and general immune responses, such as allergies and anaphylaxis. Due to the seriousness of these potential consequences, comparative immunogenicity assessment is a key element of biosimilar development programs that provides head‐to‐head comparison of anti‐drug antibody (ADA) incidence for the biosimilar product vs. the innovator. Clinicians' concerns over immunogenicity stem from a lack of consensus as to the clinical significance of immunogenicity in the context of biosimilar development, and the limitations of the immunogenicity assessments conducted pre‐marketing, which do not reflect real‐world experience, which entails long exposure to the product, and may involve switches between products. Administrative burden of prescribing biosimilars Another barrier to biosimilar adoption is reimbursement. Many commercial payers list a preferred originator or biosimilar biologic product and require additional prior authorization steps for reimbursement of non‐preferred formulary products, which creates an additional administrative burden for providers. Biosimilar products are more susceptible to changes in formulary status (from preferred to non‐preferred) than their innovator counterparts. Preferred products vary from payer to payer, and a change in payer or a change in a payer's formulary may necessitate switching from one biosimilar to another during an ongoing treatment course. These payer‐mandated switches are time‐consuming for providers and concerning to both providers and patients. 
A lengthy conversation is often necessary to provide the patient with clarity and reassurance. Finally, in small practices, the logistical burden of stocking multiple biosimilars within a class due to varying payers' requirements can be challenging. Substitution without provider intervention The view in the clinical community is that the decision to switch a patient from an innovator biologic to a biosimilar should always be a clinical decision made by the treating provider on an individual patient basis, supported by scientific evidence and with patient awareness. The possibility for a biosimilar manufacturer to seek interchangeability designation is perceived by some clinicians across medical specialties as an infringement upon the decision‐making power of the provider, as interchangeability allows for the substitution of the interchangeable biosimilar product with the innovator without provider intervention. Concerns over the possibility of automatic substitution are particularly high among rheumatologists and gastroenterologists. Changing and switching constitute rheumatologists' biggest concerns over biosimilars. In its 2018 position paper, the American College of Rheumatology (ACR), defines changing as the intentional therapeutic alteration that is initiated by a healthcare provider in partnership with the patient. Changing can be motivated by economic reasons (non‐medical changing) or medical reasons when a patient is not responding to a product. The ACR reserves the term switching for the transition to or from a biosimilar which has been approved as interchangeable. Concerns over non‐medical switching in rheumatology have been captured in a recent survey of 320 board‐certified US rheumatologists. These concerns are exacerbated when it comes to switching patients that have been stabilized on the innovator product. A majority of rheumatologists polled (65%) stated that they would be unlikely to switch a patient stabilized on the innovator product, to a biosimilar. In the gastroenterology community, IBD is particularly difficult to manage due to the inconsistency of clinical manifestations, unpredictable outcome of a given therapeutic intervention and need for long‐term monitoring to prevent flare‐ups, and high variability between patients. Switching between products would add a layer of uncertainty in the treatment course of patients known to be difficult to treat. Physicians' concerns over substitution need to be nuanced by the fact that substitution at the pharmacy level for FDA‐designated interchangeable products is regulated by state legislation in most of the United States. State laws require almost uniformly that (1) the pharmacists notify both the provider and patient that a substitution has been made; (2) the pharmacist and prescriber retain records of substituted biologic medications; and (3) legislation provide immunity for the pharmacists making the substitution in compliance with state laws. Lack of incentive for clinicians to prescribe biosimilars (modest cost savings for the patient, high administrative burden) Although prescribing lower cost biosimilars will save money to the payer (health insurance provider), the cost savings associated with biosimilars may not be directly passed down to patients. Innovator companies negotiate substantial rebates with pharmacy benefit managers that often offset the difference between the listed prices of a biosimilar and its innovator counterpart. 
This issue is of particular significance for biosimilars prescribed in rheumatology and for biosimilar insulins. A Johns Hopkins study reported that patients prescribed an infliximab biosimilar ultimately paid 12% less out of pocket than they would pay for the innovator biologic—to be compared with a 45% saving on out‐of‐pocket cost when using a filgrastim biosimilar instead of the innovator. Regarding insulin products, the interplay between list prices (i.e., prices uninsured patients would pay) and rebates is such that high list prices do not necessarily reflect the net out‐of‐pocket cost for a patient. A higher wholesale acquisition cost (WAC) may be associated with a lower net price for payers—and ultimately better market penetration—if deeper rebates are applied to a product with a high WAC. Semglee has been made available both as an unbranded insulin glargine product and as a brand‐name product. The unbranded version of Semglee (marketed as Insulin Glargine (Insulin glargine‐yfgn)) is priced 65% lower than the innovator Lantus. By contrast, the branded version of Semglee is not significantly less expensive than the innovator Lantus (WAC of $404.04 per package of five 3‐mL pens for branded Semglee vs. WAC of $425.31 per package of five 3‐mL pens for Lantus). Branded Semglee, however, has a considerably larger market share than the unbranded insulin glargine formulation. Because of the way that net prices are negotiated, the extent of cost‐savings that can be achieved with the introduction of biosimilar insulins remains to be evaluated. In addition to modest cost differences for the patient, formulary exclusions on biosimilars achieved by innovator companies further disincentivize the prescription of biosimilars. A March 2020 report from the Health and Human Services Office of Inspector General based on an analysis of Medicare Part D formularies in 2019, revealed significant gaps in formulary coverage of commercially available biosimilars. Whereas the net effect of formulary exclusions on biosimilars and price rebates on the RP is still cost reductions, these practices disincentivize the development of biosimilars. Variability and drift: A concern for oncologists Like innovator biologics, biosimilars are subject to multiple changes to their manufacturing process after their initial approval. These manufacturing changes have the potential to result in changes in product quality attributes over time, a phenomenon known as “drift” that is closely monitored by regulatory authorities. Any proposed changes are reviewed for their potential impact on safety or efficacy of the product, and manufacturers are expected to routinely control for batch‐to‐batch consistency using advanced analytic methods. The theoretical impact of drift on product efficacy and safety became a very tangible issue for oncologists during the development of SB3, a biosimilar to the anti‐human epidermal growth factor receptor 2 protein (HER2) monoclonal antibody (mAb) trastuzumab, indicated for HER2‐positive (HER2+) breast cancer. Antibody‐dependent cellular cytotoxicity (ADCC) is one of several critical quality attributes of trastuzumab and a key component of trastuzumab's mechanism of action. The CCS conducted as part of the development program of SB3 (NCT02149524) using breast pathologic complete response (bpCR) upon completion of neoadjuvant therapy and surgery as the primary end point, revealed a 10.7% risk difference in bpCR rates in favor of SB3. 
Three‐ and 4‐year follow‐up data from the 5‐year treatment‐free extension study conducted to assess cardiac safety of SB3 revealed significant differences in event‐free survival (EFS) and overall survival (OS) in favor of SB3. , Analytical studies conducted on multiple lots of the European Union‐ and US‐sourced innovators revealed two periods of drift with regards to certain attributes, including ADCC. A post hoc analysis identified ADCC activity and bpCR as the only factors associated with EFS. ADCC activity was designated according to whether patients treated with the innovators were exposed to a trastuzumab lot with drifted ADCC activity. Overall, about 50% of the trastuzumab innovator lots were classified as having a drift in ADCC activity. There was insufficient power to test the hypothesis of a relationship between ADCC activity and EFS, but the 3‐year EFS rate was higher in patients not exposed to the drifted trastuzumab innovators (92.7%) than in those exposed to the drifted product (81.7%). However, EFS curves for SB3 and the non‐drifted trastuzumab innovators appeared superimposable, pointing to drift as the cause for the apparent innovator inferiority in the SB3 extension study. This particular case illustrates drift on the innovator product, but drift can impact all biologics, including biosimilars. Unlike small molecules, biologics need to be closely monitored for potential drift. The nocebo effect and lack of patient awareness on biosimilars: A concern for gastroenterologists treating IBD Patients undergoing a non‐medical switch from an innovator biologic may experience an unexplained, unfavorable therapeutic effect after the switch, that can be reverted after re‐initiating the innovator. This phenomenon is known as the nocebo effect. , The nocebo effect can lead to poor clinical outcomes or adverse events (AEs) not associated with the specific pharmacologic action of the product and has been shown to interfere with outcomes in patients with IBD. As such, the nocebo effect may play an important role in the higher‐than expected discontinuation rates reported in patients with immune‐mediated disease who switched to the biosimilar CT‐P13 after being stabilized on an innovator treatment. Queiroz et al . conducted a meta‐analysis of 30 observational studies involving a total of 3,594 patients with IBD to examine the impact of an innovator to biosimilar switch on discontinuation rates over time. All studies in the dataset included a minimum post‐switch follow‐up of > 6 months or 3 infusions. Drug discontinuation rates were monitored at 6, 12, and 24 months and disease worsening remission, loss of adherence, AEs, and loss of response were the main reported reasons for discontinuation. The discontinuation rates in switched patients were found to be comparable to those observed in patients treated only with the innovator in historic cohorts. Other studies have shown different results. An observational study was conducted on 125 patients, 101 of whom with IBD. All participants were informed of the therapy expectation following a possible non‐medical switch by written documentation and oral communication with the treating provider and agreed to transition from the infliximab innovator to a biosimilar. 
Although there were no significant longitudinal changes in disease activity, PK, or laboratory outcomes, 12.8% of all switched patients experienced AEs such as feelings of diminished effect, chills during the infusion, numbness, and new onset headache that were identified as nocebo responses. The inconsistency of findings from the studies that have examined a potential impact of the nocebo effect post hoc , highlights the need to investigate the nocebo effect prospectively and more systematically. Partially linked to the nocebo effect is the lack of familiarity with biosimilars among patients with IBD, which can participate in triggering negative expectations. A 2018 European survey of 1,619 patients with IBD revealed contrasting views on biosimilars among patients. Less than 50% of the polled patients had heard of biosimilars, and among those, 50% worried about the biosimilar being less effective than the innovator, and 46% expressed concerns about the biosimilar safety profile. The low patient awareness and unfavorable perceptions of biosimilars reported in this survey highlight the need for patient education, which could improve biosimilar adoption. Barriers specific to the adoption of biosimilar insulin products The need for insulin individualized dosing Approved insulin products broadly fall under four categories: rapid‐acting (e.g., insulin lispro and insulin aspart), short‐acting (e.g., regular human insulin), intermediate‐acting (neutral protamine hagedorn (NPH) insulin), and long‐acting insulins (e.g., insulin detemir, insulin glargine, and insulin degludec). These categories are associated with differences in the PK and PD profiles of insulin products, and the PK and PD profiles could differ among products in the same category. Goldman et al . compared PK and PD profiles of several basal insulin products routinely used in clinical practice. Results illustrated that understanding the differences in PK parameters (systemic concentrations and half‐lives) and PD parameters (onset of effect, and duration of action) across products, and patients, is critical. A broad adoption of insulin biosimilars will depend on the availability of biosimilar products with a wide range of PD profiles. Insulin dosing is highly individualized, and in each patient the insulin requirements vary significantly throughout the day, as has been demonstrated in both type 1 diabetes (T1D) and T2D. , As such, intermittent and continuous glucose monitoring will remain a key component of clinical management for patients receiving biosimilar insulins, as it is for all categories of insulin products. Patient interactions with insulin delivery devices Insulin products are self‐administered, mostly using pen injector devices, which is an important difference with the biosimilar products used in other disease areas. Additionally, patients may not be delivering the same dose of insulin each time, because their insulin dose may vary throughout the day depending on their blood glucose level. Therefore, the interface between the patient and insulin delivery device is extremely important, and device features (e.g., color coding, injection force, and dose range) need to be carefully considered in terms of the potential impact on adherence, safety, and effectiveness. Heinemann et al . highlight the disruptive effects of a pen device change on patients with diabetes; suggesting that device differences might be more of a concern for patients than an insulin product change. 
Overemphasis on interchangeability The FDA's interchangeability guidance of 2019 requires an assessment of the impact of switching or alternating between use of the proposed interchangeable product and RP on clinical PK, PD (if applicable), immunogenicity, and safety as a condition for obtaining an interchangeability designation. However, the immunogenicity and interchangeability assessment may be waived for insulin biosimilars if analytical similarity has been demonstrated. This may catalyze the approval of insulin products with an interchangeable designation. Education will be important to ensure that the public understands that interchangeability is a designation defined by statute that allows for pharmacy‐level substitution (subject to state law in the United States).
, These surveys link those concerns with a lack of familiarity with the specifics of the biosimilar regulatory approval process. For instance, unlike the development of innovator biologics which relies on clinical data to establish safety and efficacy of the innovator product, biosimilar development has limited reliance on clinical data for comparison between the biosimilar and innovator. In oncology, these concerns are likely impacting the uptake of the more recently approved cancer treatment biosimilars, which are not being adopted as rapidly and robustly as have been supportive care biosimilars. Another specificity of the biosimilar development program is the concept of extrapolation of indications, which involves conducting studies in a patient population considered most sensitive to detect clinically meaningful differences between the proposed biosimilar and the innovator, to support approval of the biosimilar for other approved conditions of use of the innovator; this waives the need to conduct CCS in each of the indications approved for the innovator. To support extrapolation, scientific justifications are expected for each additional indication. Examples of justifications include same mechanism of action in tested and non‐tested conditions of use, similar PK and biodistribution, and similar safety profiles, including immunogenicity. Extrapolation of indications is a concern for clinicians across medical specialties. Many oncologists are reluctant to prescribe a biosimilar in an indication that has not been clinically evaluated. Gastroenterologists managing patients with IBD also experience concerns over extrapolation of indications. The infliximab biosimilar CT‐P13 (infliximab‐dyyb) was approved for use in IBD based on the extrapolation of data obtained from studies conducted in rheumatoid arthritis (RA) and ankylosing spondylitis (AS), which has raised concerns among gastroenterologists. , Immunogenicity is another concern over biosimilar use, although it can affect all biologics. Immunogenicity can be defined as an unwanted immune response to a biologic product that has the potential to affect the product's PK, PD, safety, and efficacy. Immunogenicity can lead to neutralization of administered product and loss of efficacy, cross reactivity with endogenous counterpart for certain products, and general immune responses, such as allergies and anaphylaxis. Due to the seriousness of these potential consequences, comparative immunogenicity assessment is a key element of biosimilar development programs that provides head‐to‐head comparison of anti‐drug antibody (ADA) incidence for the biosimilar product vs. the innovator. Clinicians' concerns over immunogenicity stem from a lack of consensus as to the clinical significance of immunogenicity in the context of biosimilar development, and the limitations of the immunogenicity assessments conducted pre‐marketing, which do not reflect real‐world experience, which entails long exposure to the product, and may involve switches between products. Another barrier to biosimilar adoption is reimbursement. Many commercial payers list a preferred originator or biosimilar biologic product and require additional prior authorization steps for reimbursement of non‐preferred formulary products, which creates an additional administrative burden for providers. Biosimilar products are more susceptible to changes in formulary status (from preferred to non‐preferred) than their innovator counterparts. 
Preferred products vary from payer to payer, and a change in payer or a change in a payer's formulary may necessitate switching from one biosimilar to another during an ongoing treatment course. These payer‐mandated switches are time‐consuming for providers and concerning to both providers and patients. A lengthy conversation is often necessary to provide the patient with clarity and reassurance. Finally, in small practices, the logistical burden of stocking multiple biosimilars within a class due to varying payers' requirements can be challenging. The view in the clinical community is that the decision to switch a patient from an innovator biologic to a biosimilar should always be a clinical decision made by the treating provider on an individual patient basis, supported by scientific evidence and with patient awareness. The possibility for a biosimilar manufacturer to seek interchangeability designation is perceived by some clinicians across medical specialties as an infringement upon the decision‐making power of the provider, as interchangeability allows for the substitution of the interchangeable biosimilar product with the innovator without provider intervention. Concerns over the possibility of automatic substitution are particularly high among rheumatologists and gastroenterologists. Changing and switching constitute rheumatologists' biggest concerns over biosimilars. In its 2018 position paper, the American College of Rheumatology (ACR), defines changing as the intentional therapeutic alteration that is initiated by a healthcare provider in partnership with the patient. Changing can be motivated by economic reasons (non‐medical changing) or medical reasons when a patient is not responding to a product. The ACR reserves the term switching for the transition to or from a biosimilar which has been approved as interchangeable. Concerns over non‐medical switching in rheumatology have been captured in a recent survey of 320 board‐certified US rheumatologists. These concerns are exacerbated when it comes to switching patients that have been stabilized on the innovator product. A majority of rheumatologists polled (65%) stated that they would be unlikely to switch a patient stabilized on the innovator product, to a biosimilar. In the gastroenterology community, IBD is particularly difficult to manage due to the inconsistency of clinical manifestations, unpredictable outcome of a given therapeutic intervention and need for long‐term monitoring to prevent flare‐ups, and high variability between patients. Switching between products would add a layer of uncertainty in the treatment course of patients known to be difficult to treat. Physicians' concerns over substitution need to be nuanced by the fact that substitution at the pharmacy level for FDA‐designated interchangeable products is regulated by state legislation in most of the United States. State laws require almost uniformly that (1) the pharmacists notify both the provider and patient that a substitution has been made; (2) the pharmacist and prescriber retain records of substituted biologic medications; and (3) legislation provide immunity for the pharmacists making the substitution in compliance with state laws. Although prescribing lower cost biosimilars will save money to the payer (health insurance provider), the cost savings associated with biosimilars may not be directly passed down to patients. 
Innovator companies negotiate substantial rebates with pharmacy benefit managers that often offset the difference between the listed prices of a biosimilar and its innovator counterpart. This issue is of particular significance for biosimilars prescribed in rheumatology and for biosimilar insulins. A Johns Hopkins study reported that patients prescribed an infliximab biosimilar ultimately paid 12% less out of pocket than they would pay for the innovator biologic—to be compared with a 45% saving on out‐of‐pocket cost when using a filgrastim biosimilar instead of the innovator. Regarding insulin products, the interplay between list prices (i.e., prices uninsured patients would pay) and rebates is such that high list prices do not necessarily reflect the net out‐of‐pocket cost for a patient. A higher wholesale acquisition cost (WAC) may be associated with a lower net price for payers—and ultimately better market penetration—if deeper rebates are applied to a product with a high WAC. Semglee has been made available both as an unbranded insulin glargine product and as a brand‐name product. The unbranded version of Semglee (marketed as Insulin Glargine (Insulin glargine‐yfgn)) is priced 65% lower than the innovator Lantus. By contrast, the branded version of Semglee is not significantly less expensive than the innovator Lantus (WAC of $404.04 per package of five 3‐mL pens for branded Semglee vs. WAC of $425.31 per package of five 3‐mL pens for Lantus). Branded Semglee, however, has a considerably larger market share than the unbranded insulin glargine formulation. Because of the way that net prices are negotiated, the extent of cost‐savings that can be achieved with the introduction of biosimilar insulins remains to be evaluated. In addition to modest cost differences for the patient, formulary exclusions on biosimilars achieved by innovator companies further disincentivize the prescription of biosimilars. A March 2020 report from the Health and Human Services Office of Inspector General based on an analysis of Medicare Part D formularies in 2019, revealed significant gaps in formulary coverage of commercially available biosimilars. Whereas the net effect of formulary exclusions on biosimilars and price rebates on the RP is still cost reductions, these practices disincentivize the development of biosimilars. Like innovator biologics, biosimilars are subject to multiple changes to their manufacturing process after their initial approval. These manufacturing changes have the potential to result in changes in product quality attributes over time, a phenomenon known as “drift” that is closely monitored by regulatory authorities. Any proposed changes are reviewed for their potential impact on safety or efficacy of the product, and manufacturers are expected to routinely control for batch‐to‐batch consistency using advanced analytic methods. The theoretical impact of drift on product efficacy and safety became a very tangible issue for oncologists during the development of SB3, a biosimilar to the anti‐human epidermal growth factor receptor 2 protein (HER2) monoclonal antibody (mAb) trastuzumab, indicated for HER2‐positive (HER2+) breast cancer. Antibody‐dependent cellular cytotoxicity (ADCC) is one of several critical quality attributes of trastuzumab and a key component of trastuzumab's mechanism of action. 
The CCS conducted as part of the development program of SB3 (NCT02149524) using breast pathologic complete response (bpCR) upon completion of neoadjuvant therapy and surgery as the primary end point, revealed a 10.7% risk difference in bpCR rates in favor of SB3. Three‐ and 4‐year follow‐up data from the 5‐year treatment‐free extension study conducted to assess cardiac safety of SB3 revealed significant differences in event‐free survival (EFS) and overall survival (OS) in favor of SB3. , Analytical studies conducted on multiple lots of the European Union‐ and US‐sourced innovators revealed two periods of drift with regards to certain attributes, including ADCC. A post hoc analysis identified ADCC activity and bpCR as the only factors associated with EFS. ADCC activity was designated according to whether patients treated with the innovators were exposed to a trastuzumab lot with drifted ADCC activity. Overall, about 50% of the trastuzumab innovator lots were classified as having a drift in ADCC activity. There was insufficient power to test the hypothesis of a relationship between ADCC activity and EFS, but the 3‐year EFS rate was higher in patients not exposed to the drifted trastuzumab innovators (92.7%) than in those exposed to the drifted product (81.7%). However, EFS curves for SB3 and the non‐drifted trastuzumab innovators appeared superimposable, pointing to drift as the cause for the apparent innovator inferiority in the SB3 extension study. This particular case illustrates drift on the innovator product, but drift can impact all biologics, including biosimilars. Unlike small molecules, biologics need to be closely monitored for potential drift. IBD Patients undergoing a non‐medical switch from an innovator biologic may experience an unexplained, unfavorable therapeutic effect after the switch, that can be reverted after re‐initiating the innovator. This phenomenon is known as the nocebo effect. , The nocebo effect can lead to poor clinical outcomes or adverse events (AEs) not associated with the specific pharmacologic action of the product and has been shown to interfere with outcomes in patients with IBD. As such, the nocebo effect may play an important role in the higher‐than expected discontinuation rates reported in patients with immune‐mediated disease who switched to the biosimilar CT‐P13 after being stabilized on an innovator treatment. Queiroz et al . conducted a meta‐analysis of 30 observational studies involving a total of 3,594 patients with IBD to examine the impact of an innovator to biosimilar switch on discontinuation rates over time. All studies in the dataset included a minimum post‐switch follow‐up of > 6 months or 3 infusions. Drug discontinuation rates were monitored at 6, 12, and 24 months and disease worsening remission, loss of adherence, AEs, and loss of response were the main reported reasons for discontinuation. The discontinuation rates in switched patients were found to be comparable to those observed in patients treated only with the innovator in historic cohorts. Other studies have shown different results. An observational study was conducted on 125 patients, 101 of whom with IBD. All participants were informed of the therapy expectation following a possible non‐medical switch by written documentation and oral communication with the treating provider and agreed to transition from the infliximab innovator to a biosimilar. 
Although there were no significant longitudinal changes in disease activity, PK, or laboratory outcomes, 12.8% of all switched patients experienced AEs such as feelings of diminished effect, chills during the infusion, numbness, and new onset headache that were identified as nocebo responses. The inconsistency of findings from the studies that have examined a potential impact of the nocebo effect post hoc , highlights the need to investigate the nocebo effect prospectively and more systematically. Partially linked to the nocebo effect is the lack of familiarity with biosimilars among patients with IBD, which can participate in triggering negative expectations. A 2018 European survey of 1,619 patients with IBD revealed contrasting views on biosimilars among patients. Less than 50% of the polled patients had heard of biosimilars, and among those, 50% worried about the biosimilar being less effective than the innovator, and 46% expressed concerns about the biosimilar safety profile. The low patient awareness and unfavorable perceptions of biosimilars reported in this survey highlight the need for patient education, which could improve biosimilar adoption. The need for insulin individualized dosing Approved insulin products broadly fall under four categories: rapid‐acting (e.g., insulin lispro and insulin aspart), short‐acting (e.g., regular human insulin), intermediate‐acting (neutral protamine hagedorn (NPH) insulin), and long‐acting insulins (e.g., insulin detemir, insulin glargine, and insulin degludec). These categories are associated with differences in the PK and PD profiles of insulin products, and the PK and PD profiles could differ among products in the same category. Goldman et al . compared PK and PD profiles of several basal insulin products routinely used in clinical practice. Results illustrated that understanding the differences in PK parameters (systemic concentrations and half‐lives) and PD parameters (onset of effect, and duration of action) across products, and patients, is critical. A broad adoption of insulin biosimilars will depend on the availability of biosimilar products with a wide range of PD profiles. Insulin dosing is highly individualized, and in each patient the insulin requirements vary significantly throughout the day, as has been demonstrated in both type 1 diabetes (T1D) and T2D. , As such, intermittent and continuous glucose monitoring will remain a key component of clinical management for patients receiving biosimilar insulins, as it is for all categories of insulin products. Patient interactions with insulin delivery devices Insulin products are self‐administered, mostly using pen injector devices, which is an important difference with the biosimilar products used in other disease areas. Additionally, patients may not be delivering the same dose of insulin each time, because their insulin dose may vary throughout the day depending on their blood glucose level. Therefore, the interface between the patient and insulin delivery device is extremely important, and device features (e.g., color coding, injection force, and dose range) need to be carefully considered in terms of the potential impact on adherence, safety, and effectiveness. Heinemann et al . highlight the disruptive effects of a pen device change on patients with diabetes; suggesting that device differences might be more of a concern for patients than an insulin product change. 
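The practical importance of these PK/PD differences can be illustrated with a simple simulation. The following R sketch is purely illustrative: it uses a one-compartment model with first-order absorption (Bateman function) and hypothetical rate constants that do not correspond to any marketed insulin product, and it only shows how differences in absorption and elimination half-lives translate into the kind of concentration-time profiles compared by Goldman et al.

```r
# Illustrative only: a one-compartment model with first-order absorption
# (Bateman function). All parameters below are hypothetical and do not
# describe any marketed insulin product.
bateman <- function(t, dose, F, V, ka, ke) {
  (F * dose * ka / (V * (ka - ke))) * (exp(-ke * t) - exp(-ka * t))
}
t <- seq(0, 24, by = 0.1)                          # hours after injection
rapid <- bateman(t, dose = 10, F = 0.8, V = 12, ka = 2.0,  ke = 0.7)   # fast absorption/elimination
basal <- bateman(t, dose = 10, F = 0.8, V = 12, ka = 0.15, ke = 0.10)  # slow absorption/elimination
plot(t, rapid, type = "l", xlab = "Time (h)", ylab = "Concentration (arbitrary units)")
lines(t, basal, lty = 2)
legend("topright", c("rapid-acting profile", "long-acting profile"), lty = c(1, 2), bty = "n")
# Summary metrics often compared across products: time to peak exposure
c(tmax_rapid = t[which.max(rapid)], tmax_basal = t[which.max(basal)])
```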
Overemphasis on interchangeability The FDA's interchangeability guidance of 2019 requires an assessment of the impact of switching or alternating between use of the proposed interchangeable product and RP on clinical PK, PD (if applicable), immunogenicity, and safety as a condition for obtaining an interchangeability designation. However, the immunogenicity and interchangeability assessment may be waived for insulin biosimilars if analytical similarity has been demonstrated. This may catalyze the approval of insulin products with an interchangeable designation. Education will be important to ensure that the public understands that interchangeability is a designation defined by statute that allows for pharmacy‐level substitution (subject to state law in the United States).
This section is based on Dr. Lyman's, Dr. Gibofsky's, Dr. Lichstenstein's, and Dr. Bloomgarden's workshop presentations. Healthcare provider education and integration into clinical practice guidelines In a 2018 survey, 300 managed care and specialty pharmacy professionals were asked to identify barriers to biosimilar adoption and rate their identified barriers based on how difficult they thought these barriers would be to overcome. Many of the barriers identified as difficult to overcome could be addressed by education, including prescriber and patient concerns about biosimilar safety and efficacy. Respondents were also asked to rate strategies to overcome barriers to biosimilar adoption, and the highest rated strategies were for prescriber education about evidence from switching studies (91% of responses) and the FDA guidance on pharmacy‐level substitution of RP with biosimilars (90%). These survey results suggest that education efforts undertaken by the FDA and clinical societies are impactful and should be continued. Communications from clinical societies and integration of biosimilars into clinical practice guidelines will be key to a broader adoption of biosimilars. The American Society of Clinical Oncology (ASCO) issued a statement on biosimilars as early as 2013 (most recently updated in March 2022 ) covering the safety and efficacy of biosimilars, regulatory considerations, as well as the value of biosimilars, and resources for prescriber and patient education. , As an example, the ASCO has already integrated biosimilars into their supportive care guidelines, which may have been one of the drivers of a fairly high level of adoption of supportive care biosimilars. Integration of biosimilars into cancer treatment guidelines is underway and will contribute to enhancing biosimilar use for curative purposes. Similar efforts have been undertaken by rheumatology societies. In 2018, the ACR released a white paper on biosimilars, followed by an updated position statement encouraging clinicians to incorporate biosimilars in their practice. , Enhanced patient‐provider communication and the SHARE approach to clinical management Provider‐driven patient education can also help overcome barriers to biosimilar adoption across all medical specialties. In the diabetes space, ensuring that patients quickly become proficient in the use of the injection device associated with a particular biosimilar product will be critical to the adoption of insulin biosimilars. Similarly, ensuring that patients with diabetes better understand the rigor of the FDA regulatory requirements for approval of biosimilar and interchangeable insulin products may encourage patients to favor FDA‐approved biosimilar products over low‐cost products, such as those offered by illegitimate internet pharmacies that place patients at risk of poor‐quality medications that can result in AEs and poor diabetes control. The link between the type of information conveyed to the patient and nocebo effects is well‐documented and has led to the development of prevention strategies that involve screening of patients at risk for nocebo effects and appropriate patient‐clinician communication to prevent unnecessary negative expectations.
Such communication could occur during the informed consent process, through providing procedural information, and at follow‐up assessments. Shared decision making between provider and patient can play a critical role in the successful integration of biosimilars into the management of patients with IBD. The Agency for Healthcare Research and Quality has developed a five‐step process for shared decision making that includes exploring and comparing the benefits, harms, and risks of each option through meaningful dialogue about what matters most to the patient. Post‐market surveillance/real‐world data on the efficacy and safety of biosimilars Clinicians across specialties are interested in real‐world data about the performance of biosimilars compared with innovator biologics. The accumulation of data from controlled clinical studies and real‐world clinical practice will address concerns over indication of extrapolation and switching and alternating from an innovator to a biosimilar. The availability of these data relies on the availability of registries. In oncology, the Dutch Cancer registry, a nationwide, population‐based registry, has yielded reassuring results regarding the performance of cancer care biosimilars. Being one of the first to adopt European Medicines Agency (EMA)‐approved rituximab biosimilars (R‐biosimilars) for the treatment of diffuse large B‐cell lymphoma (DLBCL), the Dutch government sought to compare OS in patients with DLBCL receiving R‐biosimilars vs. the originator (R‐originator). Data from thousands of patients with DLBCL treated with at least one cycle of rituximab products between 2014 and 2018 were analyzed using 3‐year OS as the primary end point. The study concluded that the 3‐year OS did not differ between patients with DLBCL treated with R‐biosimilars and those treated with the R‐originator. By the end of 2018, 91% of rituximab purchased in the Netherlands were biosimilars, accounting for a 43% reduction in annual costs for the Dutch government. In rheumatology and gastroenterology, the real‐world clinical data accrued so far in both treatment‐naïve and switched patients, do not substantiate the concerns over switching from an innovator biologic to a biosimilar in the management of immune‐mediated inflammatory diseases. The NOR‐SWITCH study is an often‐cited switching study. It is a 52‐week randomized noninferiority phase IV trial conducted in Norway in patients with IBD, and in patients with immune‐mediated diseases in rheumatology, namely AS, RA, psoriatic arthritis, and psoriasis, treated with infliximab. The study compared the clinical outcomes of patients switched once from the innovator to a biosimilar infliximab (CT‐P13/ infliximab‐dyyb) and patients who remained on the innovator. The primary end point for all patient populations was disease worsening during the 52‐week follow‐up. In the aggregated study population that included all 6 conditions of use, disease worsening occurred in 53 (26%) patients in the innovator group and 61 (30%) patients in the biosimilar group, which supports a conclusion of biosimilar noninferiority according to the prespecified noninferiority margin of 15%. The frequency of AEs was similar between the two groups as well. 
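To make the noninferiority logic concrete, the following R sketch reconstructs an approximate version of this comparison. The group sizes are back-calculated from the reported counts and percentages, and the published analysis used an adjusted risk difference, so the numbers are indicative rather than an exact reproduction of the trial report.

```r
# Approximate reconstruction of the NOR-SWITCH primary comparison.
# Group sizes are back-calculated from the reported counts/percentages
# (53 events, 26%; 61 events, 30%), so results are indicative only.
n_innov  <- round(53 / 0.26)   # ~204 patients remaining on the innovator
n_biosim <- round(61 / 0.30)   # ~203 patients switched to the biosimilar
p_innov  <- 53 / n_innov
p_biosim <- 61 / n_biosim
rd <- p_innov - p_biosim       # risk difference (negative values disfavor the biosimilar)
se <- sqrt(p_innov * (1 - p_innov) / n_innov + p_biosim * (1 - p_biosim) / n_biosim)
ci <- rd + c(-1, 1) * qnorm(0.975) * se
# Noninferiority requires the lower 95% CI bound to stay above the -15% margin.
c(risk_difference = rd, lower95 = ci[1], upper95 = ci[2], margin = -0.15)
```

With these approximate inputs the lower bound of the 95% confidence interval remains above the prespecified -15% margin, consistent with the noninferiority conclusion reported by the investigators.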
With regards to the use of biosimilars in rheumatology, the conclusions from the NOR‐SWITCH study have been confirmed by real‐world data collected in the Danish DANBIO registry, a nationwide quality registry that collected data from a mandated non‐medical switch, where cohorts of patients with RA, psoriatic arthritis, and AS underwent a single switch from the infliximab originator to an infliximab biosimilar (infliximab‐dyyb). Clinical outcomes were compared 3 months before and after the switch from the innovator to the biosimilar. In all three cohorts, similar disease activity was observed pre‐ and post‐switch for all clinical measures. With regard to IBD management, the results of NOR‐SWITCH are supported by a multicenter phase III study conducted in patients with moderate to severe active bio‐naïve Crohn's disease over 54 weeks. Patients were randomized 1:1 to the innovator or CT‐P13 until week 30, then each treatment group was randomized 1:1 to either remaining on the same treatment or switching, resulting in 4 maintenance regimens (innovator to innovator, innovator to CT‐P13, CT‐P13 to CT‐P13, and CT‐P13 to innovator) until week 54. The proportion of patients with a decrease of 70 points or more in the Crohn's Disease Activity Index (CDAI) from baseline was the clinical end point. Overall, the efficacy and safety study results did not suggest differences of clinical outcomes in patients treated with the biosimilar vs. the innovator, or between groups that underwent a single switch at week 30 vs. those that remained on the same treatment throughout 54 weeks. The accumulation of data on insulin biosimilar and interchangeable products can also contribute to mitigate concerns regarding these products. A meta‐analysis of 14 randomized controlled trials (RCTs) with 6,188 patients was conducted to ascertain the efficacy, safety, and immunogenicity of biosimilar insulin products compared with their innovator counterparts. This study showed that there was no difference between the biosimilar products and the innovators in HbA1c levels at 24 and 52 weeks (efficacy end points), hypoglycemia, or severe hypoglycemia. Data from phase III studies in patients with T1D and T2D as part of the development of SAR342434, an insulin lispro biosimilar, show no differences in doses, HbA1c, fasting plasma glucose, self‐monitored blood glucose, hypoglycemia, and weight change. , The development of insulin ADA needs to be put in perspective given that 51.4% and 22.7% of patients with T1D and T2D, respectively, were ADA‐positive prior to the study. So far, the clinical experience with immunogenicity to insulin products has not raised concerns with respect to the products' safety and effectiveness. Considerations for evaluating immunogenicity during development of biosimilar insulin products can be found in the 2019 FDA guidance. CCS conducted for biosimilar products to NovoLog (insulin aspart) and Lantus (insulin glargine) have shown similar results (i.e., an absence of difference across all clinical outcomes between the biosimilar and the RP).
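As an illustration of how pooled estimates of this kind are obtained, the R sketch below uses the metafor package on invented placeholder trial summaries (not the data of the cited meta-analysis) to compute a random-effects pooled mean difference in HbA1c between biosimilar and innovator arms.

```r
# Sketch of pooling per-trial HbA1c changes in a meta-analysis.
# The three rows below are invented placeholder numbers, not data from the
# cited meta-analysis of 14 RCTs.
library(metafor)
dat <- data.frame(
  trial = c("trial_A", "trial_B", "trial_C"),
  m1i = c(-0.60, -0.55, -0.70), sd1i = c(0.9, 1.0, 0.8), n1i = c(250, 300, 180),  # biosimilar arm
  m2i = c(-0.58, -0.57, -0.66), sd2i = c(0.9, 1.0, 0.8), n2i = c(250, 300, 180)   # innovator arm
)
dat <- escalc(measure = "MD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)
res <- rma(yi, vi, data = dat, method = "REML")  # random-effects pooled mean difference
summary(res)
forest(res)  # a pooled difference near zero supports "no difference" in HbA1c
```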
The potential cost savings on biological products from the introduction of biosimilars is enormous. Even more than lower biosimilar prices, it is downward pressure on innovator prices that is expected to drive cost savings on biological products. The uptake of biosimilars across therapeutic areas to date has been slow and gradual, mostly driven by payers rather than clinicians and patients. Increased biosimilar uptake would result in even larger cost savings. In the diabetes space, the major concern of a treating physician is not only related to the efficacy or safety of insulin biosimilars, but also to the patient's response to the device used to inject the new product. Providers need to ensure that patients are properly trained to use the device associated with a change in product (be it a change to a biosimilar insulin or another insulin product), even if that device is the same type of device (e.g., a pen). Outside of diabetes care, non‐medical switching, especially when applied to a patient stabilized on a particular agent, is a major concern for clinicians and patients. Clinicians want to make decisions regarding a patient's treatment themselves. Another concern with regard to switching is the hypothetical potential for increased immunogenicity. Physicians are expecting to see data that mirror the real‐world use of biosimilars, which involves multiple switches over multiple years. Doubts about the efficacy of biosimilars related to the relative paucity of pre‐market clinical data appear to still influence prescribers' decisions.
In oncology, these doubts about efficacy explain why clinicians are still hesitant to prescribe biosimilars with curative intent (e.g., mAb for the treatment of early‐stage malignancies), whereas the uptake of biosimilars for supportive care has been rapid. The absence of the “hassle factor,” which captures the lack of incentive for a provider to prescribe a biosimilar due to relatively modest direct cost benefits for an insured patient, is a structural issue that can only be addressed by manufacturers, payers, and/or government. All other major barriers to biosimilar uptake identified in this workshop can be addressed by a multi‐tier education framework targeting the healthcare providers directly in contact with the patients, and the patients themselves. Clinical pharmacology, as a discipline, has a role to play in educating healthcare providers about the scientific evidence that supports the approval of biosimilars (e.g., PK‐PD approaches) and the concept of indication extrapolation, which is one of the bases for the cost savings associated with the development of biosimilars. Product‐specific information, rather than education on biosimilar products in general, needs to be provided. Patients also need to be adequately educated on biosimilars prior to initiation of a biosimilar (and, in the case of patients with diabetes self‐administering insulin, on the proper use of the device), as well as screened and adequately managed for the nocebo effect. Treating physicians and nurses have a major role to play in patients' education, as well as pharmacists, as more interchangeable products become available. The FDA, clinical societies, and patient associations can all contribute to the development of these educational tools. Significant efforts have been made by clinical societies to educate practitioners, where biosimilars in general have been the focus of white papers and dedicated sessions at national meetings. Educational efforts now need to be undertaken at the grass‐roots level and target patients and advanced practice providers (nurse practitioners and physician assistants), few of whom prescribe biosimilars. These efforts should aim to educate patients and care providers, with an emphasis on the high regulatory standards that biosimilars need to meet for approval, as well as the benefits of biosimilars to healthcare systems. This publication was supported by the Food and Drug Administration as part of a financial assistance award U01FD005946 totaling $5,000 with 100 percent funded by the FDA/Health and Human Services (HHS). G.H.L. was the principal investigator on research grant to institution from Amgen has consulted for Sandoz; G1 Therapeutics; Partners Healthcare; BeyondSpring; ER Squibb; MSD; Jazz Pharm; TEVA; Fresenius Kabi; and Samsung all outside the submitted work. R.K.C. has received fees for consulting and participation in advisory boards for AbbVie, Janssen, Pfizer, and Samsung Bioepis and has a research grant with Janssen. All other authors declared no competing interests for this work. DISCLAIMER The opinions expressed in this manuscript are those of the authors and should not be interpreted as the position of the US Food and Drug Administration. This manuscript is entirely based on the presentations and panel discussion from the April 13, 2022 M‐CERSI/US FDA public workshop. The presentation slides and recording of the event can be found at https://cersi.umd.edu/biosimilars‐decade‐experience‐and‐future‐directions .
Environmental micro‐niche filtering shapes bacterial pioneer communities during primary colonization of a Himalayas' glacier forefield
The Himalayas has been defined as the ‘third pole’ because of a relatively large land surface coverage by ice glaciers and the harsh climatic conditions above 5000 m altitude. Such conditions, which include large diurnal temperature fluctuations, strong winds and high UV radiations, make the Himalayas one of the most extreme terrestrial environments on Earth (Yao et al., ). Unlike other similar ecosystems, such as the hyper‐arid Atacama Desert (Wierzchos et al., ) and the cold Antarctic desert (Orellana et al., ), the Himalayas region has a less extensive microbial ecology investigation (Dhakar & Pandey, ; Ezzat et al., ), because of the geographical remoteness and roughness that historically hampered its exploration (Lami et al., ; Matthews et al., ). As other mountain regions worldwide, also the Himalayas ecosystems are threatened by climate change (Hamid et al., ; Tak & Keshari, ; Zhao et al., ) that determines a progressive melting of glaciers and recession of their fronts (Adler et al., ; Maurer et al., ; Shean et al., ). Thus, scientific investigations focused on the changes in the hydrogeology of rivers, high‐altitude basins, ice melting, precipitations (Jury et al., ; Salerno et al., ), and their consequences on the vulnerability of local communities and ecosystems (Ashton & Zhu, ; Heath et al., ; Xu et al., ) are strongly demanded. As an effect of glacier melting, a bare mineral substrate is exposed during ice retreat, increasing the size of ice‐free areas in the foreland (Diolaiuti et al., ; Nuth et al., ). During the time out of the ice cover, this oligotrophic substrate, poor in nutrients and with a small amount of organic carbon, nitrogen and other nutrients, undergoes physical, chemical and microbiological modifications that favour primary colonization and the associated biogeochemical transformations (Garrido‐Benavent et al., ) that pave the later stages of plants biocenosis establishment (Borin et al., ; Mapelli et al., , ). In this context, glacier forefields provide unique opportunities to study the early stages of soil pedogenesis and biotic colonization, a poorly investigated aspect in high‐elevation ecosystems like the Third Pole (Sherpa et al., ). In the early succession stages, pioneer microorganisms, including cyanobacteria, chemoheterotrophic and diazotrophic bacteria, fungi, algae, lichens and bryophytes, are the main players in colonizing the moraine barren debris (Warren et al., ; Wietrzyk‐Pełka et al., ) and glacier‐fed streams (Busi et al., ; Ezzat et al., ; Fodelianakis et al., ; Kohler et al., ). These pioneer microorganisms can form complex structures named biological soil crusts (BSCs) (Perera et al., ), contributing to key ecosystem services, such as biogeochemical cycles, water retention and stabilization/consolidation of the proto‐soil against erosion (Barger et al., ; Ghiloufi et al., ; Maestre et al., ; Mapelli et al., ; Rippin et al., ; Weber et al., ). Among prokaryotes forming the BSCs, the aerobic photoautotrophs cyanobacteria are the most studied (Pessi et al., ): they guarantee primary productivity, providing nitrogen and carbon to the substrates for the proliferation of the other BSC components, such as chemolithoautotrophic microorganisms—equally important in the primary succession processes. 
For instance, in the high Arctic moraine of the Midtre Lovénbreen glacier, the rock weathering mediated by the chemolithoautotrophic bacterium Acidithiobacillus ferrooxidans increased water retention, favouring the formation of a soil fertility island, which fostered plant establishment and growth (Borin et al., ; Mapelli et al., ). The physical and chemical composition of the proto‐soil matrix, together with the climatic factors, affect the dynamics, structure and functionality of the BSCs‐associated communities in a complex interplay between the biotic and abiotic components of the ecosystem (Bourquin et al., ; Mallen‐Cooper et al., ; Schulz et al., ). The stages of succession can last several years in cold and high‐elevation environments (Schmidt et al., ) and BSCs from different points of moraine chronosequences differ remarkably in their development and microbial diversity (Mapelli et al., ). However, moraines are not topographically uniform and the terrain layout in each point of a chronosequence can be rather variable in terms of slope, spatial orientation, exposure to light and climatic factors (wind, precipitations, etc.), thus imposing variable selective pressures that determine different communities. In the present study, we selected the foreland of the Lobuche glacier (5050 m above sea level [asl]) in the Khumbu Valley, located in the eastern area of the Sagarmatha National Park in Nepal, as a model site to investigate the involvement of BSCs in the process of primary colonization of the mineral substrate released in the glacier moraine. We test the hypotheses that: (i) bacterial communities associated with the BSCs differ from those of the underlying deep layers (DLs; i.e., mineral substrate between BSC and permafrost) and (ii) irrespective of the substrate types (BSC and DL), bacterial communities are specific for each site because of the niche‐specific microenvironmental conditions. The rationale behind the hypothesis is that the variable topography, microenvironment and substrate properties across the irregular spots in glacier moraines weigh in the shaping of microbial diversity in the rocky DLs and the overlying BSCs. A study area consisting of a 50 m 2 stone pit has been identified in the ablation tongue of the glacier and it is featured by a mosaic of hills and depressions (Jones et al., ). Six sites presenting greyish/dark BSCs as signs of primary colonization were selected within such area (Figure ). In each of the sites, BSCs have developed under different microenvironmental conditions. The close spatial proximity of these sites within the studied area abolishes the effect of time associated with points along chronosequences, thus allowing us to examine the effect of environmental variability. The BSCs and the corresponding belowground DLs were collected from each site to unveil the interactions among the micro‐climatic and physico‐chemical conditions and the microbial components in structuring the pioneer colonization of the barren substrate and driving the process of soil formation in the high‐elevation environment of the Himalayas. We combined membrane lipid analysis and high‐throughput sequencing of the bacterial 16S rRNA gene to unveil the community structure of the microbiome inhabiting the BSCs and the DLs. Through this approach, we have identified environmental factors driving a site‐dependent bacterial assembly of BSCs and DLs that add the microbiome microvariability perspective to the current knowledge on pedogenesis at the Third Pole.
Study sites and sample collection In October 2009, an area showing evidence of primary colonization was identified in the forefield of the Lobuche glacier, Nepal, at 5050 m asl (27°57′24.92″ N, 86°48′35.81″ E; Figure ). The region is classified as a cold desert, experiencing long winter periods with temperatures below 0°C, while summer periods are short and arid, although snowmelt and occasionally rainfall pulses locally increase moisture content (Derin et al., ; Salerno et al., ). The study area was in a stone pit in the glacier forefield, located in the relic debris‐mantled ablation zone (~1 km in length) that is disconnected from the ice accumulation zone of the glacier. The topography of this ablation tongue is featured by a mosaic of hills and depressions (Jones et al., ). Samples were collected from six sites (named 1–6) located in a pit depression (from 1.14 to 3.30 m depth compared to the moraine level; schematic representation in Figure ), covering an overall area of 50 m 2 . The sites were selected based on the presence of greyish/dark biological patinas (thereafter defined as BSCs) positioned in different topographical conditions (hills/depressions) present in the moraine (Figure ). The BSCs were poorly developed (thicknesses up to 5 mm; Table ) without any presence of lichens, mosses and vascular plants (Figure ). For each of the six sites, BSCs and the relative below mineral substrates (hereafter defined as DL) were collected in triplicate ∼5 cm apart from each other and stored at −20°C for subsequent analysis. Samples for physico‐chemical characterization were air‐dried for 24 h and stored at room temperature until the analysis in the laboratory. Samples of BSC and DL were collected from each site and immediately soaked in methanol for further intact polar lipid (IPL) analysis. Measurement of environmental parameters in situ Environmental parameters such as minimal and maximal air temperature (°C), air humidity (%), percentage of moisture in BSC (% relative humidity [RH]; hygrometer Sama Tolls, Italy), total solar radiation (PYR) and photosynthetically active radiation (PAR; PAR SENSOR QSO‐S PAR Photon Flux, METER Group, Inc., USA) were recorded to assess the micro‐environmental influence on bacterial community structure in each site. Physico‐chemical and geological analysis DLs were sieved after drying into fine (<2 mm) and coarse (>2 mm) particles. The pH of DLs was determined in an aqueous solution using a 1:2.5 soil/water ratio; total nitrogen by the Kjeldahl method (total Kjeldahl nitrogen [TKN]); available P using the Olsen method; and organic carbon by wet oxidation. For cation exchange capacity (CEC) and exchangeable Ca, Mg, K, Al, Fe and Na determinations, samples were saturated with BaCl 2 ‐triethanolamine solution (pH 8.1) and exchangeable cations were determined by inductively coupled plasma (ICP‐MAS VARIAN, Liberty AX, Walnut Creek, CA, USA). All analytical data were determined on the fine particle fraction (<2 mm); all the above methods were described in Mapelli et al., . Water holding capacity was determined by the Stackman box method (Mapelli et al., ). In the case of BSC, samples were sieved after drying; pH, nitrogen and carbon content were determined as indicated above. Dissolved organic matter (DOM) was extracted from transect crust with deionized water, using 5 g of equivalent dry material (1:2 solid:liquid ratio, w/w) in a Dubnoff bath at 60 rpm for 30 min at room temperature. 
The obtained suspension was centrifuged for 15 min at 6500 rpm and then vacuum filtered twice: firstly by using a fast glass fibre filter (Whatman GF 6) and then by using a cellulose acetate membrane filter (Whatman OE 67) of 0.45 μm, then its organic carbon content (dissolved organic carbon [DOC]) was determined as reported before (Mapelli et al., ). BSCs antioxidant activity was assessed using the DPPH radical scavenging method (Gulcin, ). Briefly, an aliquot of DPPH (125 μM) (Prot. N. D9132, Sigma Aldrich, Dermstadt, Germany) solution in methanol was added to DOM. The decrease in absorbance at 517 nm was recorded by a spectrophotometer (Cary 60 UV–Vis, Agilent Technologies, Santa Clara, CA, USA). The results were expressed as IC 50 , that is, the DOC concentration that scavenges 50% of DPPH determined after 30 min of reaction. All the analyses were performed in triplicate. The biological material of BSC at Site 6 was not enough to perform physico‐chemical analyses. The physico‐chemical tables containing the data from BSCs and DLs of the sites were squared transformed and used to create a resemblance matrix using the Euclidean distance in PRIMER (Anderson et al., ). Canonical analysis of principal components (CAP) was used to visualize site variation. Significant differences in physico‐chemical composition were investigated by permutational analysis of variance (PERMANOVA) in Primer (Anderson et al., ), considering the factor ‘Site’ as a fixed and orthogonal factor (5 and 6 levels in BSCs and DLs, respectively). PERMANOVA pair‐wise tests were also conducted to evaluate the effect of ‘Site’ in both sample categories, BSC and DLs. The contribution of the variables to the physico‐chemical differences among sites was assessed by the analysis of similarity percentages (SIMPER) in PRIMER (Anderson et al., ). X‐ray powder diffraction X‐ray powder diffraction (XRD) analyses were performed after micro‐sampling proto‐soil crusts. XRD measurements were carried out using an X'Pert Panalytical Diffractometer working in Bragg–Brentano geometry and equipped with an X'Celeretor Detector. Each sample was manually ground in an agate mortar and then analysed in the 3–80° 2 range (Cu‐wavelength, 40 kV, 40 mA) with a step size of 0.02° and a counting time of 20 s. Qualitative mineralogical phase analysis was performed based on peak position (Patterson, ), and then indications about their quantities were obtained based on the peak intensities. Scanning electron microscope analysis Ultramicroscopic analyses employed a Cambridge 360 scanning electron microscope (SEM), imaging both secondary and back‐scattered electrons. Some elemental analyses were performed to identify mineral constituents, with an energy dispersive x‐ray analysis (EDS Link Isis 300) requiring carbon‐coated samples: energy dispersive x‐ray spectroscopy with an accelerating voltage of 20 kV, filament intensity 1.70 A and probe intensity of 280 pA. Intact polar lipid analysis To analyse lipid biomarkers approximately, 2.5–8 g (dry weight) of soil material was collected and transferred with methanol (MeOH)‐cleaned spatulas into combusted glass vials and sealed with MeOH‐cleaned Teflon‐coated screw caps. The samples were immediately inundated with MeOH to prevent further microbial activity and stored at ambient temperatures in the dark. In the laboratory, 2 μg phospholipid standard di‐C21‐PC (Avanti Polar Lipids, USA) was added to all samples and lipids were extracted using a modified Bligh and Dyer procedure (Sturt et al., ). 
For the first two extraction steps, MeOH:dichloromethane (DCM):phosphate buffer (2:1:0.8, v/v) was used, followed by a third step using MeOH:DCM (1:3, v/v). For IPL analysis, an aliquot of the total lipid extract was measured in positive ion mode on a Bruker maXis Plus ultra‐high‐resolution quadrupole time‐of‐flight mass spectrometer coupled with an electrospray ionization source to a Dionex Ultimate 3000RS ultra‐high‐pressure liquid chromatograph. A Waters Acquity UHPLC BEH Amide column was used for HILIC (hydrophilic interaction liquid chromatography) separation of IPLs following the protocol described in Wörmer et al., . For quantification, peak areas of individual IPLs were normalized to the internal di‐C21‐PC standard and response factors were corrected using commercially available standards as described in Schubotz et al., . A second aliquot was separated into a sterol‐containing alcohol fraction and free fatty acid‐containing acid fraction using a solid‐phase extraction (Hinrichs et al., ) after derivatization with BSTFA and BF3, to form respective TMS (trimethylsilyl)‐derivatives and FAMEs (fatty acid methyl esters), the alcohol and fatty acid fractions were analysed on a ThermoFinnigan Trace GC coupled to a Finnigan DSQ for identification and a Finnigan FID for quantification. The quantitative IPLs data from BSCs and DLs were log‐transformed to avoid overdispersion before creating a resemblance matrix using the Euclidean distance in PRIMER (Anderson et al., ). CAP was used to evaluate differences in IPLs composition (Anderson et al., ), considering the factor ‘Type’ as a fixed factor (two levels: BSCs and DLs). To explore and visualize the distribution of samples in the ordination space, the principal coordinates analysis (PCoA) was built in PRIMER. DNA extraction and sequencing Total DNA was extracted from 0.5 ± 0.1 g of dry BSC and DL samples with the PowerSoil DNA Isolation Kit (MoBio Inc., CA, USA), following the manufacturer's instructions. DNA quality was evaluated on 0.8% agarose gel, quantified using a NanoDrop Microvolume Spectrophotometer (Thermo Fisher Scientific) and stored at −20°C until further processing. High‐throughput Illumina sequencing was performed on the V3–V4 hypervariable regions of the 16S rRNA gene fragments by PCR amplification using 341F and 785R primers at Macrogen Inc., South Korea. Raw sequences were analysed with QIIME pipeline, including quality filtering, trimming, dereplication of the sequences and creation of operational taxonomic units (OTUs at 97% of similarity), as previously described (Booth et al., ). Representative sequences of each OTUs were aligned with the database in QIIME using uclust (Caporaso et al., ) and blast commands to search against the SILVA version 138 (Quast et al., ). After removing singletons, sequences non‐assigned to bacteria (i.e., plastids, archaea and unassigned), and sequences with relative abundance <0.001% in the entire dataset, a total of 3,477,016 reads with an average length of 300 bp were obtained (range of reads per sample: 37,725–133,904; Table ). Rarefaction curves for the BSC and DL samples were reported in Figure . Sequences were deposited to the Sequence Read Archive of NCBI under the BioProject PRJNA698068. The bacterial OTU table was used to calculate alpha‐diversity indices (Shannon diversity and observed richness) in R using the Phyloseq package. Occupancy‐abundance curves were generated by calculating the number of samples in which a certain OTU was detected and its total relative abundance. 
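As a minimal sketch of the alpha-diversity and occupancy-abundance summaries described above, the following R code assumes that the filtered OTU table and sample metadata have been assembled into a phyloseq object named ps (a placeholder name, not an object distributed with the study).

```r
# Alpha-diversity indices and occupancy-abundance summaries from a phyloseq
# object `ps` (placeholder name) built from the filtered OTU table.
library(phyloseq)
alpha <- estimate_richness(ps, measures = c("Observed", "Shannon"))
otu <- as(otu_table(ps), "matrix")
if (taxa_are_rows(ps)) otu <- t(otu)       # orient as samples x OTUs
occupancy   <- colSums(otu > 0)            # number of samples in which each OTU occurs
total_relab <- colSums(otu) / sum(otu)     # total relative abundance of each OTU
plot(total_relab, occupancy, log = "x",
     xlab = "Total relative abundance (log scale)",
     ylab = "Occupancy (no. of samples)")
```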
The OTUs shared among different sites in BSCs and DLs were identified by a Venn‐diagram analysis in R. OTUs were tested for their differential abundance (enrichment and depletion; i.e., log 2 ‐fold change) between BSC and DL using the DEseq2 package in R (Love et al., ). Considering BSC and DL separately, we computed the differential abundance of OTUs across sites and plotted the related taxonomic data through a differential heat tree using the package Metacoder (Foster et al., ). Linear discriminant analysis effect size (LEfSe) was further used to determine the bacterial discriminants between sites (Segata et al., ). The most important OTUs that contribute to the dissimilarity between bacterial communities among micro‐environments in BSC and DL were defined by SIMPER in PRIMER (Anderson et al., ). Beta‐diversity of bacterial communities was analysed using the compositional Bray–Curtis (BC) similarity matrix of the relative log‐transformed OTUs‐table in PRIMER (Anderson et al., ). The BC‐similarity matrix was used to perform the CAP and PERMANOVA to statistically test the impact of the categorical explanatory variable ‘Type’ (two levels: BSC and DL), along with the variable ‘Site’ nested in ‘Type’ (6 levels for both BSC and DL: 1, 2, 3, 4, 5 and 6). PERMANOVA pair‐wise tests were also conducted to evaluate the effect of ‘Site’ for BSC and DL. We computed the beta‐diversity components by using the beta.div.comp function of the package adespatial (Dray et al., ). The OTU table was used to infer the mechanisms driving the bacterial communities assembly in BSCs and DLs by applying a phylogenetic bin‐based null model (iCAMP) version 1.2.9 with recommended default settings (Ning et al., ). Sloan neutral community model was applied to predict the relationship between the frequency of occurrence of taxa in a community, that is, BSC and DL, and assessed the potential importance of the neutral or stochastic process in shaping bacterial community (Sloan et al., ). This model evaluates whether the microbial assembly from a metacommunity follows a neutral model or a niche‐based process as a function of the metacommunity log abundance. The fitting of the model was performed in the R environment using non‐linear least‐squares fitting and the minpack.lm , Hmisc and stats4 packages (Chen et al., ). The 95% confidence interval around all fitting statistics was calculated by bootstrapping with 1000 replicates. The taxa were subsequently separated into three fractions depending on whether they occurred more frequently than the neutral model predictions (over‐predicted fraction), less frequently than the neutral model predictions (under‐predicted fraction), or within the 95% confidence interval of the neutral model predictions (neutrality interval). The proportion of variability R 2 quantifies the fit level of detection frequency to the model, Nm is an estimate of dispersal between communities in which N is the metacommunity size and m is the rate of the individuals immigrating from the source community into the local community. We also applied this model by considering a taxon's abundance in a source metacommunity (DL or BSC) and its occurrence frequency in the target metacommunity (BSC or DL) to test the source‐sink hypothesis (Burns et al., ). 
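The neutral community model fit can be condensed into a few lines of R. The sketch below follows the general approach of Sloan et al. and Burns et al. using minpack.lm, assuming otu is a samples by OTU count matrix for one community type (for example, all BSC samples); it omits the confidence envelope and the model comparisons included in the full implementation.

```r
# Condensed Sloan neutral community model fit for one community type,
# assuming `otu` is a samples x OTU count matrix (placeholder name).
library(minpack.lm)
N    <- mean(rowSums(otu))                                  # mean community size
p    <- colMeans(sweep(otu, 1, rowSums(otu), "/"))          # mean relative abundance of each OTU
freq <- colMeans(otu > 0)                                   # occurrence frequency of each OTU
keep <- p > 0 & freq > 0
p <- p[keep]; freq <- freq[keep]
d <- 1 / N                                                  # detection limit
fit <- nlsLM(freq ~ pbeta(d, N * m * p, N * m * (1 - p), lower.tail = FALSE),
             start = list(m = 0.1))                         # estimate migration rate m
m_hat <- coef(fit)[["m"]]
pred  <- pbeta(d, N * m_hat * p, N * m_hat * (1 - p), lower.tail = FALSE)
R2    <- 1 - sum((freq - pred)^2) / sum((freq - mean(freq))^2)  # fit to the neutral prediction
c(m = m_hat, Nm = N * m_hat, R2 = R2)
# OTUs observed more or less frequently than the neutral confidence envelope
# would be classified as over- or under-predicted, respectively.
```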
Finally, to quantitatively estimate the potential contribution of DL bacterial communities (here given as source environment) to those of BSC (here given as sink environment), we used the Bayesian‐based SourceTracker (Knights et al., ) available as an R package at http://sourcetracker.sf.net . The rate of decay of the bacterial community's similarity (BC) across the sites was evaluated in function of the site distance (m), and Euclidean distance matrices of physico‐chemical and micro‐climatic conditions of the sites and of the IPL diversity for both BSCs and DLs; linear regressions were computed in GraphPad Prism. Significant physico‐chemical variables explaining the bacterial communities structure (16S rRNA gene‐based) were assessed by using a distance‐based multivariate linear model (DistLM) in Primer (Anderson et al., ) and the overall best solutions (lowest corrected Akaike information criterion [AICc]) was indicated. The set of physico‐chemical variables measured in BSC and DL (explanatory variable) was used to assess the amount of variation of the bacterial community (multivariate response variables) by running the function Best.sq.r() from the mvabund package in R (Wang et al., ). The three most important physicochemical variables detected were further used to identify which bacterial members (relative abundance) were positively/negatively correlated with them in the two layers by running the Metacoder package in R (Foster et al., ). Co‐occurrence network analysis The bacterial phylotypes enriched in BSC and DL samples were used to build two correlation networks by calculating all pairwise Spearman correlation coefficients among these bacterial taxa in CoNet (Faust & Raes, ). We kept both negative and positive correlations with Spearman's correlation coefficient ρ > 0.5 and p < 0.01 to provide information on microbial taxa that may respond robustly to the environmental conditions of BSC and DL. The co‐occurrence network was visualized with Gephi (Bastian et al., ) and default parameters were used to identify their topological indices.
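A rough open-source analogue of the CoNet/Gephi workflow is sketched below in R, assuming relab is a samples by taxa matrix of relative abundances for the BSC- or DL-enriched phylotypes (a placeholder name). CoNet's ensemble scoring and permutation-renormalization steps are not reproduced; the sketch simply keeps edges with |Spearman rho| > 0.5 and p < 0.01, as stated above.

```r
# Rough analogue of the co-occurrence network construction: pairwise Spearman
# correlations among enriched taxa, thresholded and converted to a graph.
library(Hmisc)
library(igraph)
cor_res <- rcorr(as.matrix(relab), type = "spearman")
rho  <- cor_res$r
pval <- cor_res$P
adj <- abs(rho) > 0.5 & pval < 0.01        # keep strong, significant correlations
adj[is.na(adj)] <- FALSE
diag(adj) <- FALSE
g <- graph_from_adjacency_matrix(adj * 1, mode = "undirected")
E(g)$sign <- sign(rho[as_edgelist(g, names = FALSE)])   # retain positive/negative sign
# Basic topological indices of the resulting network
c(nodes = vcount(g), edges = ecount(g),
  mean_degree = mean(degree(g)),
  clustering = transitivity(g, type = "global"))
```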
In October 2009, an area showing evidence of primary colonization was identified in the forefield of the Lobuche glacier, Nepal, at 5050 m asl (27°57′24.92″ N, 86°48′35.81″ E; Figure ). The region is classified as a cold desert, experiencing long winter periods with temperatures below 0°C, while summer periods are short and arid, although snowmelt and occasional rainfall pulses locally increase the moisture content (Derin et al., ; Salerno et al., ). The study area was in a stone pit in the glacier forefield, located in the relic debris‐mantled ablation zone (~1 km in length) that is disconnected from the ice accumulation zone of the glacier. The topography of this ablation tongue features a mosaic of hills and depressions (Jones et al., ). Samples were collected from six sites (named 1–6) located in a pit depression (from 1.14 to 3.30 m depth relative to the moraine level; schematic representation in Figure ), covering an overall area of 50 m². The sites were selected based on the presence of greyish/dark biological patinas (hereafter defined as BSCs) positioned in different topographical conditions (hills/depressions) present in the moraine (Figure ). The BSCs were poorly developed (thicknesses up to 5 mm; Table ), without any lichens, mosses or vascular plants (Figure ). For each of the six sites, BSCs and the corresponding underlying mineral substrates (hereafter defined as DL) were collected in triplicate ∼5 cm apart from each other and stored at −20°C for subsequent analysis. Samples for physico‐chemical characterization were air‐dried for 24 h and stored at room temperature until analysis in the laboratory. Samples of BSC and DL were also collected from each site and immediately soaked in methanol for subsequent intact polar lipid (IPL) analysis.
In situ environmental parameters, namely minimal and maximal air temperature (°C), air humidity (%), the percentage of moisture in the BSC (% relative humidity [RH]; hygrometer Sama Tools, Italy), total solar radiation (PYR) and photosynthetically active radiation (PAR; PAR SENSOR QSO‐S PAR Photon Flux, METER Group, Inc., USA), were recorded at each site to assess the micro‐environmental influence on bacterial community structure.
DLs were sieved after drying into fine (<2 mm) and coarse (>2 mm) particles. The pH of DLs was determined in an aqueous solution using a 1:2.5 soil/water ratio; total nitrogen by the Kjeldahl method (total Kjeldahl nitrogen [TKN]); available P using the Olsen method; and organic carbon by wet oxidation. For cation exchange capacity (CEC) and exchangeable Ca, Mg, K, Al, Fe and Na determinations, samples were saturated with BaCl 2 ‐triethanolamine solution (pH 8.1) and exchangeable cations were determined by inductively coupled plasma (ICP‐MAS VARIAN, Liberty AX, Walnut Creek, CA, USA). All analytical data were determined on the fine particle fraction (<2 mm); all the above methods were described in Mapelli et al., . Water holding capacity was determined by the Stackman box method (Mapelli et al., ). In the case of BSC, samples were sieved after drying; pH, nitrogen and carbon content were determined as indicated above. Dissolved organic matter (DOM) was extracted from transect crust with deionized water, using 5 g of equivalent dry material (1:2 solid:liquid ratio, w/w) in a Dubnoff bath at 60 rpm for 30 min at room temperature. The obtained suspension was centrifuged for 15 min at 6500 rpm and then vacuum filtered twice: firstly by using a fast glass fibre filter (Whatman GF 6) and then by using a cellulose acetate membrane filter (Whatman OE 67) of 0.45 μm, then its organic carbon content (dissolved organic carbon [DOC]) was determined as reported before (Mapelli et al., ). BSCs antioxidant activity was assessed using the DPPH radical scavenging method (Gulcin, ). Briefly, an aliquot of DPPH (125 μM) (Prot. N. D9132, Sigma Aldrich, Dermstadt, Germany) solution in methanol was added to DOM. The decrease in absorbance at 517 nm was recorded by a spectrophotometer (Cary 60 UV–Vis, Agilent Technologies, Santa Clara, CA, USA). The results were expressed as IC 50 , that is, the DOC concentration that scavenges 50% of DPPH determined after 30 min of reaction. All the analyses were performed in triplicate. The biological material of BSC at Site 6 was not enough to perform physico‐chemical analyses. The physico‐chemical tables containing the data from BSCs and DLs of the sites were squared transformed and used to create a resemblance matrix using the Euclidean distance in PRIMER (Anderson et al., ). Canonical analysis of principal components (CAP) was used to visualize site variation. Significant differences in physico‐chemical composition were investigated by permutational analysis of variance (PERMANOVA) in Primer (Anderson et al., ), considering the factor ‘Site’ as a fixed and orthogonal factor (5 and 6 levels in BSCs and DLs, respectively). PERMANOVA pair‐wise tests were also conducted to evaluate the effect of ‘Site’ in both sample categories, BSC and DLs. The contribution of the variables to the physico‐chemical differences among sites was assessed by the analysis of similarity percentages (SIMPER) in PRIMER (Anderson et al., ).
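The multivariate routines above were run in PRIMER, which is proprietary; as a point of reference, the same kind of workflow (Euclidean resemblance matrix, PERMANOVA on the factor 'Site' and a constrained ordination in place of CAP) can be sketched in R with the vegan package. The objects below (`chem`, `site`) are mock placeholders, not the authors' data, so this is an illustrative approximation rather than the exact analysis.

```r
# Illustrative R/vegan analogue of the PRIMER workflow described above.
# `chem` (samples x physico-chemical variables) and `site` are mock placeholders.
library(vegan)

set.seed(42)
chem <- data.frame(pH  = runif(18, 5, 8),  TKN = runif(18, 0.1, 2),
                   DOC = runif(18, 1, 50), Fe  = runif(18, 10, 300))
site <- factor(rep(paste0("Site", 1:6), each = 3))
meta <- data.frame(site = site)

# Transform and build a Euclidean resemblance matrix (variables are scaled
# so that they contribute comparably to the distance)
chem_dist <- vegdist(scale(sqrt(chem)), method = "euclidean")

# PERMANOVA testing the factor 'Site'
print(adonis2(chem_dist ~ site, data = meta, permutations = 999))

# Distance-based constrained ordination as a stand-in for CAP
cap <- capscale(chem_dist ~ site, data = meta)
plot(cap, display = "sites")
```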
X‐ray powder diffraction (XRD) analyses were performed after micro‐sampling the proto‐soil crusts. XRD measurements were carried out using an X'Pert PANalytical diffractometer working in Bragg–Brentano geometry and equipped with an X'Celerator detector. Each sample was manually ground in an agate mortar and then analysed over the 3–80° 2θ range (Cu radiation, 40 kV, 40 mA) with a step size of 0.02° and a counting time of 20 s. Qualitative mineralogical phase analysis was performed based on peak positions (Patterson, ), and indications of the relative phase abundances were then obtained from the peak intensities.
Ultramicroscopic analyses were carried out with a Cambridge 360 scanning electron microscope (SEM), imaging both secondary and back‐scattered electrons. Elemental analyses were performed on carbon‐coated samples to identify mineral constituents, using energy dispersive X‐ray spectroscopy (EDS Link Isis 300) with an accelerating voltage of 20 kV, a filament intensity of 1.70 A and a probe intensity of 280 pA.
To analyse lipid biomarkers approximately, 2.5–8 g (dry weight) of soil material was collected and transferred with methanol (MeOH)‐cleaned spatulas into combusted glass vials and sealed with MeOH‐cleaned Teflon‐coated screw caps. The samples were immediately inundated with MeOH to prevent further microbial activity and stored at ambient temperatures in the dark. In the laboratory, 2 μg phospholipid standard di‐C21‐PC (Avanti Polar Lipids, USA) was added to all samples and lipids were extracted using a modified Bligh and Dyer procedure (Sturt et al., ). For the first two extraction steps, MeOH:dichloromethane (DCM):phosphate buffer (2:1:0.8, v/v) was used, followed by a third step using MeOH:DCM (1:3, v/v). For IPL analysis, an aliquot of the total lipid extract was measured in positive ion mode on a Bruker maXis Plus ultra‐high‐resolution quadrupole time‐of‐flight mass spectrometer coupled with an electrospray ionization source to a Dionex Ultimate 3000RS ultra‐high‐pressure liquid chromatograph. A Waters Acquity UHPLC BEH Amide column was used for HILIC (hydrophilic interaction liquid chromatography) separation of IPLs following the protocol described in Wörmer et al., . For quantification, peak areas of individual IPLs were normalized to the internal di‐C21‐PC standard and response factors were corrected using commercially available standards as described in Schubotz et al., . A second aliquot was separated into a sterol‐containing alcohol fraction and free fatty acid‐containing acid fraction using a solid‐phase extraction (Hinrichs et al., ) after derivatization with BSTFA and BF3, to form respective TMS (trimethylsilyl)‐derivatives and FAMEs (fatty acid methyl esters), the alcohol and fatty acid fractions were analysed on a ThermoFinnigan Trace GC coupled to a Finnigan DSQ for identification and a Finnigan FID for quantification. The quantitative IPLs data from BSCs and DLs were log‐transformed to avoid overdispersion before creating a resemblance matrix using the Euclidean distance in PRIMER (Anderson et al., ). CAP was used to evaluate differences in IPLs composition (Anderson et al., ), considering the factor ‘Type’ as a fixed factor (two levels: BSCs and DLs). To explore and visualize the distribution of samples in the ordination space, the principal coordinates analysis (PCoA) was built in PRIMER.
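For orientation, the quantification step described above (normalizing each IPL peak area to the internal di‐C21‐PC standard and correcting it with a response factor) boils down to a simple proportional calculation; a minimal R sketch is given below with entirely hypothetical numbers and a helper function name of our own choosing.

```r
# Minimal sketch of IPL quantification relative to an internal standard.
# All values and the function name are illustrative placeholders.
quantify_ipl <- function(peak_area, std_area, std_added_ug,
                         response_factor, dry_weight_g) {
  # ug of analyte per g dry sample, relative to the co-injected standard
  (peak_area / std_area) * std_added_ug / response_factor / dry_weight_g
}

# Example: one glycolipid peak against 2 ug of di-C21-PC in 5 g of sample
quantify_ipl(peak_area = 3.2e6, std_area = 1.1e6, std_added_ug = 2,
             response_factor = 0.8, dry_weight_g = 5)
```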
DNA extraction and sequencing
Total DNA was extracted from 0.5 ± 0.1 g of dry BSC and DL samples with the PowerSoil DNA Isolation Kit (MoBio Inc., CA, USA), following the manufacturer's instructions. DNA quality was evaluated on a 0.8% agarose gel, quantified using a NanoDrop Microvolume Spectrophotometer (Thermo Fisher Scientific) and stored at −20°C until further processing. High‐throughput Illumina sequencing was performed on the V3–V4 hypervariable regions of the 16S rRNA gene by PCR amplification with the 341F and 785R primers at Macrogen Inc., South Korea. Raw sequences were analysed with the QIIME pipeline, including quality filtering, trimming, dereplication of the sequences and creation of operational taxonomic units (OTUs at 97% similarity), as previously described (Booth et al., ). Representative sequences of each OTU were aligned in QIIME using the uclust (Caporaso et al., ) and blast commands to search against the SILVA database version 138 (Quast et al., ). After removing singletons, sequences not assigned to bacteria (i.e., plastids, archaea and unassigned) and sequences with relative abundance <0.001% in the entire dataset, a total of 3,477,016 reads with an average length of 300 bp were obtained (range of reads per sample: 37,725–133,904; Table ). Rarefaction curves for the BSC and DL samples are reported in Figure . Sequences were deposited in the Sequence Read Archive of NCBI under the BioProject PRJNA698068.
The bacterial OTU table was used to calculate alpha‐diversity indices (Shannon diversity and observed richness) in R using the Phyloseq package. Occupancy‐abundance curves were generated by calculating the number of samples in which a given OTU was detected and its total relative abundance. The OTUs shared among different sites in BSCs and DLs were identified by a Venn‐diagram analysis in R. OTUs were tested for their differential abundance (enrichment and depletion; i.e., log2‐fold change) between BSC and DL using the DESeq2 package in R (Love et al., ). Considering BSC and DL separately, we computed the differential abundance of OTUs across sites and plotted the related taxonomic data through a differential heat tree using the package Metacoder (Foster et al., ). Linear discriminant analysis effect size (LEfSe) was further used to determine the bacterial discriminants between sites (Segata et al., ). The most important OTUs that contributed to the dissimilarity between bacterial communities among micro‐environments in BSC and DL were defined by SIMPER in PRIMER (Anderson et al., ). Beta‐diversity of bacterial communities was analysed using the Bray–Curtis (BC) similarity matrix computed on the log‐transformed OTU table of relative abundances in PRIMER (Anderson et al., ). The BC‐similarity matrix was used to perform CAP and PERMANOVA to statistically test the impact of the categorical explanatory variable 'Type' (two levels: BSC and DL), along with the variable 'Site' nested in 'Type' (six levels for both BSC and DL: 1, 2, 3, 4, 5 and 6). PERMANOVA pair‐wise tests were also conducted to evaluate the effect of 'Site' for BSC and DL. We computed the beta‐diversity components using the beta.div.comp function of the package adespatial (Dray et al., ). The OTU table was used to infer the mechanisms driving bacterial community assembly in BSCs and DLs by applying a phylogenetic bin‐based null model (iCAMP) version 1.2.9 with recommended default settings (Ning et al., ).
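To make the OTU‐table workflow concrete, the sketch below reproduces two of the steps named above, alpha‐diversity estimation with phyloseq and BSC‐versus‐DL differential abundance with DESeq2, on a mock count matrix. Object names (`otu_mat`, `meta`) are placeholders, and the size‐factor option is one common choice for sparse microbiome counts rather than necessarily the authors' setting.

```r
# Sketch of alpha diversity (phyloseq) and differential abundance (DESeq2)
# on a mock OTU table; not the authors' exact pipeline.
library(phyloseq)
library(DESeq2)

set.seed(1)
otu_mat <- matrix(rnbinom(200 * 36, mu = 50, size = 1), nrow = 200,
                  dimnames = list(paste0("OTU", 1:200), paste0("Sample", 1:36)))
meta <- data.frame(Type = rep(c("BSC", "DL"), each = 18),
                   Site = rep(rep(paste0("Site", 1:6), each = 3), times = 2),
                   row.names = colnames(otu_mat))

ps <- phyloseq(otu_table(otu_mat, taxa_are_rows = TRUE), sample_data(meta))

# Alpha diversity: observed richness and Shannon index per sample
alpha <- estimate_richness(ps, measures = c("Observed", "Shannon"))
head(alpha)

# Differential abundance (log2 fold change) between BSC and DL
dds <- phyloseq_to_deseq2(ps, ~ Type)
dds <- DESeq(dds, sfType = "poscounts")   # 'poscounts' copes with zero-rich counts
res <- results(dds, contrast = c("Type", "BSC", "DL"))
head(res[order(res$padj), ])
```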
Sloan neutral community model was applied to predict the relationship between the frequency of occurrence of taxa in a community, that is, BSC and DL, and assessed the potential importance of the neutral or stochastic process in shaping bacterial community (Sloan et al., ). This model evaluates whether the microbial assembly from a metacommunity follows a neutral model or a niche‐based process as a function of the metacommunity log abundance. The fitting of the model was performed in the R environment using non‐linear least‐squares fitting and the minpack.lm , Hmisc and stats4 packages (Chen et al., ). The 95% confidence interval around all fitting statistics was calculated by bootstrapping with 1000 replicates. The taxa were subsequently separated into three fractions depending on whether they occurred more frequently than the neutral model predictions (over‐predicted fraction), less frequently than the neutral model predictions (under‐predicted fraction), or within the 95% confidence interval of the neutral model predictions (neutrality interval). The proportion of variability R 2 quantifies the fit level of detection frequency to the model, Nm is an estimate of dispersal between communities in which N is the metacommunity size and m is the rate of the individuals immigrating from the source community into the local community. We also applied this model by considering a taxon's abundance in a source metacommunity (DL or BSC) and its occurrence frequency in the target metacommunity (BSC or DL) to test the source‐sink hypothesis (Burns et al., ). Finally, to quantitatively estimate the potential contribution of DL bacterial communities (here given as source environment) to those of BSC (here given as sink environment), we used the Bayesian‐based SourceTracker (Knights et al., ) available as an R package at http://sourcetracker.sf.net . The rate of decay of the bacterial community's similarity (BC) across the sites was evaluated in function of the site distance (m), and Euclidean distance matrices of physico‐chemical and micro‐climatic conditions of the sites and of the IPL diversity for both BSCs and DLs; linear regressions were computed in GraphPad Prism. Significant physico‐chemical variables explaining the bacterial communities structure (16S rRNA gene‐based) were assessed by using a distance‐based multivariate linear model (DistLM) in Primer (Anderson et al., ) and the overall best solutions (lowest corrected Akaike information criterion [AICc]) was indicated. The set of physico‐chemical variables measured in BSC and DL (explanatory variable) was used to assess the amount of variation of the bacterial community (multivariate response variables) by running the function Best.sq.r() from the mvabund package in R (Wang et al., ). The three most important physicochemical variables detected were further used to identify which bacterial members (relative abundance) were positively/negatively correlated with them in the two layers by running the Metacoder package in R (Foster et al., ).
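The Sloan model mentioned above has a closed form that is convenient to see written out: the expected occurrence frequency of a taxon with mean relative abundance p is 1 − Beta(d; Nmp, Nm(1 − p)), with d the detection limit, N the community size and m the migration rate estimated by non‐linear least squares. A hedged R sketch using minpack.lm on mock counts is shown below, broadly following the approach of Burns et al.; it is not the authors' script.

```r
# Sketch of fitting the Sloan neutral community model with minpack.lm
# (mock data; symbols follow the text: N, m, p and the detection limit d).
library(minpack.lm)

set.seed(7)
otu_mat <- matrix(rnbinom(500 * 18, mu = 20, size = 0.5), nrow = 500)

N    <- mean(colSums(otu_mat))                               # metacommunity size
p    <- rowMeans(sweep(otu_mat, 2, colSums(otu_mat), "/"))   # mean relative abundance
freq <- rowMeans(otu_mat > 0)                                # observed occurrence frequency
d    <- 1 / N                                                # detection limit

keep <- p > 0
p <- p[keep]; freq <- freq[keep]

# Estimate the migration rate m by non-linear least squares
fit   <- nlsLM(freq ~ pbeta(d, N * m * p, N * m * (1 - p), lower.tail = FALSE),
               start = list(m = 0.1))
m_hat <- coef(fit)[["m"]]

# Pseudo-R2 of the neutral fit (it can be negative when the model fits poorly)
pred <- pbeta(d, N * m_hat * p, N * m_hat * (1 - p), lower.tail = FALSE)
R2   <- 1 - sum((freq - pred)^2) / sum((freq - mean(freq))^2)
c(m = m_hat, R2 = R2)
```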
Co‐occurrence network analysis
The bacterial phylotypes enriched in BSC and DL samples were used to build two correlation networks by calculating all pairwise Spearman correlation coefficients among these bacterial taxa in CoNet (Faust & Raes, ). We kept both negative and positive correlations with Spearman's correlation coefficient |ρ| > 0.5 and p < 0.01 to provide information on microbial taxa that may respond robustly to the environmental conditions of BSC and DL. The co‐occurrence networks were visualized with Gephi (Bastian et al., ), and default parameters were used to compute their topological indices.
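The network construction described above was done in CoNet and Gephi; the same filtering logic (all pairwise Spearman correlations, keeping |ρ| > 0.5 with p < 0.01, then computing topological indices) can be sketched in R with Hmisc and igraph, as below. The mock community matrix and the thresholds mirror the text, but the code is an illustration, not the authors' implementation.

```r
# Illustrative R sketch of the co-occurrence network construction
# (Spearman correlations filtered at |rho| > 0.5 and p < 0.01).
library(Hmisc)
library(igraph)

set.seed(3)
grad     <- seq(-2, 2, length.out = 18)                    # latent gradient to induce correlations
taxa_mat <- sapply(1:60, function(i) rpois(18, exp(3 + rnorm(1) * grad)))
colnames(taxa_mat) <- paste0("OTU", 1:60)                  # samples x taxa (mock data)

rc   <- rcorr(taxa_mat, type = "spearman")
keep <- abs(rc$r) > 0.5 & rc$P < 0.01
keep[lower.tri(keep, diag = TRUE)] <- FALSE                # use each pair only once

edges   <- which(keep, arr.ind = TRUE)
edge_df <- data.frame(from = colnames(taxa_mat)[edges[, 1]],
                      to   = colnames(taxa_mat)[edges[, 2]],
                      rho  = rc$r[edges])                  # sign kept: co-presence vs. exclusion

g <- graph_from_data_frame(edge_df, directed = FALSE)

# A few of the topological indices discussed in the text
head(data.frame(degree = degree(g), betweenness = betweenness(g)))
modularity(cluster_louvain(g))
```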
Environmental conditions of the studied area
The Lobuche glacier moraine consists of a randomly distributed patchwork of rocks with variable topographical features, which is typical of the debris‐mantled ablation zone of the glacier. In an area of 50 m², we selected six sites in which greyish/dark BSCs developed on top of the mineral substrates (DLs; Figures ). The thickness of the BSCs ranged from 0.8 to 5 mm, while that of the DLs varied depending on the level of the underlying permafrost and ranged from <1 to 60 cm (Table ). Evaluation of climatic parameters revealed that each site was characterized by unique micro‐environmental conditions in terms of duration and intensity of exposure to light, temperature and RH (Figure ). Due to their position within the studied area (Figure ), the sites were differently exposed to solar irradiation (Figures and ): sites 1, 3 and 5 received, on the day of sampling, about 8 h of sunlight per day (20,824 ± 425 PYR and 10,720 ± 291 PAR), while sites 2 and 6 received 6 h (15,181 ± 1204 PYR and 7360 ± 1154 PAR) and Site 4 received only 4 h (10,618 PYR and 5803 PAR) because it was partially shadowed by the surrounding rocks. Variable daily temperatures (25 ± 13°C) were also observed, reaching values over 30°C in sites 1, 3 and 5, while values below 20°C were recorded in sites 2, 4 and 6 (Figure ). All the sites had similar minimum air temperatures (−7 ± 1°C) during the night. Air RH showed a specific pattern in which higher values (>50%) were observed in sites 3, 4 and 5, which were located deeper than the average level of the moraine (Figure ), while sites 1, 2 and 6 showed values <50%; the minimum RH ranged between 13% and 27% across all sites (Figure ).
Physico‐chemical characterization of BSCs and DLs
Chemical microanalysis in conjunction with SEM (Figure ) revealed that the BSCs presented tightly interconnected mineral and organic constituents on their surface. Calcium carbonate nodules were also observed in BSCs, along with nodules constituted by coalescent carbonate microtubules (Figures ). In addition, nuclear magnetic resonance (NMR) analysis revealed that the organic matter present in the BSCs was dominated by O‐CH3 or N‐alkyl, O‐alkyl C and di‐O‐alkyl C (from 60.91% to 72.65%), and by aliphatic C bonded to other aliphatic chains or H (from 11.44% to 19.55%), but with different relative abundances across the six sites (Table ). The physico‐chemical characteristics (Table ) differed significantly across the sites (PERMANOVA: F4,10 = 95.01, p = 0.001; Figure ; pairwise comparisons in Table ). Among the chemical variables measured, DOC, RH and carbon (C) explained most of the observed physico‐chemical diversity of BSCs across sites (up to 94%, Table ). The DLs were consistently composed of thin grains of quartz, feldspar and mica (Table ), while carbon was not detectable (Table ). The DL samples had a high sand content (>90%) followed by a small proportion of loam (5%–6%) and clay (0.5%–5%), except for Site 5 where these represented up to 26% and 11% of the substrate material, respectively (Table ). Despite their similarity in terms of texture, DLs had different compositions across the six sites (Table ; PERMANOVA: F5,12 = 31.71, p = 0.001; pairwise comparisons in Table ). The most important variables that characterized such diversity among DLs in terms of physico‐chemical parameters were TKN and ion content (Fe, Na, TKN, Al and K; Table ).
Diversity of IPLs across BSCs and DLs
IPLs are major components of cell membranes and provide information about the living microbial community in environmental samples (Schubotz et al., ). Based on their quantification, we found that the microbial biomass was mainly associated with BSCs, reaching up to 30‐fold higher levels than in DLs (Figure ; average fold change, 8). The IPL composition showed a separation between BSC and DL samples (CAP, choice of m = 2, trace statistic = 0.31, p = 0.034; PCoA in Figure ). Among IPLs, glycolipids (i.e., monoglycosyl diacylglycerol, diglycosyl diacylglycerol, triglycosyl diacylglycerol, sulfoquinovosyl diacylglycerol and glucuronic acid diacylglycerol), which are found in thylakoid membranes of all phototrophic organisms (Hölzl & Dörmann, ), were the most abundant lipid class: on average they represented 33% and 27% of total IPLs in BSCs and DLs, respectively (Figure ). Notably, the heterocyst glycolipids, a subgroup of glycolipids typical of nitrogen‐fixing filamentous cyanobacteria (Nichols & Wood, ), constituted 23% (range, 7%–42%) and 15% (6%–32%) of the total IPLs in BSCs and DLs, respectively. Betaine lipids, typical of lower plants and of some cyanobacteria (Klug & Benning, ; Řezanka et al., ), were abundant at all sites, representing up to 30% of total IPLs (Figure ). High relative levels of ornithine lipids, particularly trimethyl ornithine lipids that are exclusive bacterial markers (López‐Lara & Geiger, ; Moore, ), were observed in the BSCs of sites 2, 3 and 6 (relative abundance range, 2%–3.8%). Even though eukaryotic biomass (e.g., mosses, lichens and bryophytes) was not visible in BSCs, the detection of sterols revealed its presence. The distribution of sterols (mainly ergosterol, stigmasterol and sitosterol) is comparable to what has been reported for Sphagnum species (Bryophyta) found in peat bogs (Baas et al., ); sterols were mainly detected in the BSCs of all sites, in varying relative abundances (Table ), and were below detection in the DLs of Sites 1 and 6 (Table ). Phospholipids were mainly composed of phosphatidyl inositol (PI‐DAG), phosphatidyl choline, phosphatidyl ethanolamine, phosphatidyl glycerol and OH‐PC‐DAG (Figure ). Some of the IPLs detected presented multiple hydroxylations; these included betaine lipids, ornithine and trimethyl ornithine lipids and phosphatidyl choline lipids (Table ). Betaine lipids were also observed with mixed ether and ester side chains, and phosphatidyl ethanolamine was observed as diether diacylglycerol, which suggests a bacterial origin for these lipids (Schubotz et al., , ). It is important to note that while the micro‐climatic conditions of the sites at the sampling time did not show any significant correlation with the content of IPL groups (Pearson correlation p > 0.05), the RH of BSCs was significantly correlated with the content of glycolipids, phospholipids and betaine lipids (p < 0.05 and R², 0.89–0.91). No significant correlations were detected between the physico‐chemistry of DLs and the extracted IPLs. Major fatty acid carbon chain lengths (Table ) and distributions of free fatty acids (Table ) were also analysed. The most dominant combined carbon chain lengths of all IPLs were C36:3, C36:4 and C34:3, C34:2, C34:1, C32:2 and C32:3, which matches the distribution of the dominant free fatty acids C16:0, C18:1 and C18:2 (Figure and Table ).
Structure and assembly of bacterial communities associated with BSCs and DLs
The bacterial communities associated with BSC and DL samples (3769 and 3886 OTUs, respectively; Table ) were dominated by a few abundant OTUs (9 and 11 OTUs, respectively, with relative abundance >1%) that accounted for up to 14% and 20% of the total number of reads, respectively (Figure ). A long tail of rare OTUs with relative abundances ranging from 1% to 0.001% was detected in all the bacterial communities (3760 and 3875 OTUs, respectively; Pareto‐like distribution in Figure ). Most OTUs belonged to the phyla Proteobacteria (relative abundance, 26% and 24.1% in BSCs and DLs, respectively), Bacteroidetes (15.7% and 15.3%), Firmicutes (11.4% and 15.3%), Planctomycetes (8.7% and 7.7%), Acidobacteria (7.1% and 8.3%), Actinobacteria (5.9% and 9.4%), Cyanobacteria (6.9% and 3.3%), Chloroflexi (3.9% and 3.3%) and Armatimonadetes (3.8% and 2.2%; Figure ). The bacterial communities associated with DL samples exhibited a higher diversity than those of BSCs in terms of species richness (number of OTUs; Mann–Whitney test, p = 0.0008), while Shannon's and Simpson's indices did not differ between the two types of samples (p = 0.51 and p = 0.48, respectively; Figure ). The most abundant OTUs were conserved and consistently present in both BSCs and DLs (99% of shared OTUs and 91% of generalist OTUs, the latter defined as OTUs equally distributed among BSCs and DLs; Figures ). Despite the high number of shared and generalist OTUs, the bacterial communities that inhabited BSCs and DLs differed significantly (PERMANOVA: F1,34 = 2.94, p = 0.006; CAP cross‐validation 100%; variation explained, 25%). This specificity was explained by the differential distribution pattern (enrichment and depletion) of 10% of the OTUs between BSCs and DLs (Figure ); 7% (n = 256) and 3% (n = 110) of OTUs were enriched in DLs and BSCs, respectively, and accounted for 8% and 10% of the total relative abundance in each sample type. Among these, members of Alphaproteobacteria (27.7%), Oxyphotobacteria within Cyanobacteria (29%) and Bacteroidia (12%) dominated the BSC‐enriched community, while Acidobacteria (26%; Blastocatellia 12.5% and Holophagae 7.4%), Actinobacteria (16%; Acidimicrobiia 11.8% and Thermoleophilia 3.7%) and Gammaproteobacteria (10.5%) were the main members of the DL‐enriched group (Figures ). We further evaluated the fit of the BSC and DL bacterial communities to the neutral community model (Sloan et al., ). While the neutral model failed to fit the frequency of occurrence of the BSC bacterial community (coefficient of the neutral fit, R² < 0 and migration rate, m = 0.0218; Figure ), it fit that of DL only poorly (R² = 0.16 and m = 0.0435; Figure ). These results indicated that, in both cases, habitat (niche) filtering plays a more substantial role than stochastic processes in driving bacterial community assembly, that is, the enrichment/exclusion of species with appropriate/inappropriate traits for the given abiotic and biotic environmental conditions.
In addition, when the source‐sink hypothesis was tested using DL bacterial communities as the source and BSC bacterial communities as the sink, the occurrence frequency of bacterial taxa did not fit the neutral community model (R² < 0 and m = 0.019; Figure ): the composition of BSCs is not driven by a source‐sink relationship with DL, and the probability that an individual that dies or leaves the BSC community is replaced by another individual immigrating from the DL community is limited. We ran the same analysis considering BSC and DL bacterial communities as source and sink, respectively, and also in this case we rejected the source‐sink hypothesis (Figure ). Even though the source‐sink hypotheses were rejected in both combinations (DL vs. BSC and BSC vs. DL), we found that a limited part of the BSC bacterial community potentially originated/transferred from the DL layer below, based on the community‐wide Bayesian model (SourceTracker; Figure ). However, this portion decreased with increasing thickness of the BSCs (regression in Figure ). The remaining BSC taxa could not be explained by DL and were identified by SourceTracker as 'unknown'.
BSCs and DLs bacterial co‐occurrence
The network properties of the BSC and DL samples were largely comparable in size (number of nodes and total interactions) and had similar topological features (Table ; Figure ). For instance, OTUs engaging in significant associations represented 42% and 33% of the total OTUs present in the original community of BSC and DL, respectively. The associations (edges) were equally distributed between positive (co‐presence) and negative (mutual exclusion) interactions (Table ). Both co‐occurrence networks followed a power‐law degree distribution in which most nodes had few connections (degree 1–5, 74% and 69% of nodes in BSCs and DLs, respectively) and only a few nodes had numerous connections (Figure ), suggesting the presence of a non‐random co‐occurrence pattern. Compared to the BSC network, the DL network showed higher degrees (Welch‐corrected test: t = 2.37, df = 661.1, p = 0.018) and higher density, along with lower path length and modularity (Table ; Figure ). In addition, the DL nodes had lower betweenness (t = 4.57, df = 828.51, p < 0.0001), higher closeness (t = 2.016, df = 866, p = 0.044) and higher eccentricity (t = 3.40, df = 860.6, p = 0.0007) than those in BSC samples (Figures ). These data indicate that the members of the DL‐associated bacterial communities established more complex, well‐connected and closer interactions than those occurring in BSCs. The network interactions in BSCs and DLs showed a comparable taxonomic profile (proportion of nodes per phylum; Figure ), in agreement with the phylogenetic analysis described above (Figure ). The largest number of nodes was represented by members affiliated with Firmicutes, Bacteroidetes, Alphaproteobacteria, Acidobacteria and Actinobacteria, followed by Planctomycetes, Gammaproteobacteria, Chloroflexi, Cyanobacteria, Verrucomicrobia and Armatimonadetes, and by members of minor groups (i.e., Deltaproteobacteria, Deinococcus‐Thermus, FBP, Gemmatimonadetes, Omnitrophicaeota, Patescibacteria and WPS‐2). The phyla/classes detected with the highest node frequency were also responsible for the principal interactions within the networks, as revealed by the positive relationship between the number of nodes and the degree of each taxonomic group (BSCs: p < 0.0001, R² = 0.76; DLs: p < 0.0001, R² = 0.86; Figure ).
A large cluster of densely connected nodes of Firmicutes (classes Bacilli, Clostridia, Erysipelotrichia and Negativicutes), Bacteroidetes, Actinobacteria and Acidobacteria was present in both the BSC and DL networks (Figure ). Keystone species (here defined as nodes with >1% of the total degree) belonged to Firmicutes (3 OTUs; Clostridia, Negativicutes and Erysipelotrichia), Patescibacteria (1 OTU; Saccharimonadia), Acidobacteria (1 OTU; Acidobacteriia) and Bacteroidetes (1 OTU; Bacteroidia) in BSCs, while they were affiliated with Acidobacteria (3 OTUs; Acidobacteriia), Actinobacteria (1 OTU; Actinobacteria and Coriobacteria), Bacteroidetes (2 OTUs; Bacteroidia), Firmicutes (12 OTUs; Clostridia and Negativicutes) and Proteobacteria (2 OTUs; Gammaproteobacteria) in DLs. Despite a large compositional similarity among the bacterial members interacting in the BSC and DL networks, substantial differences were detected in the proportion (twofold enrichment/depletion) of nodes belonging to the phyla Armatimonadetes and Verrucomicrobia, and in the degrees of Planctomycetes nodes (Figure ).
Micro‐environmental niche effect in BSC‐ and DL‐associated bacterial communities
The components of beta‐diversity were analysed to identify the ecological processes that govern the assembly of bacterial communities in BSCs and DLs. Similar processes were identified in the aboveground BSCs and the belowground DLs: the dominant processes were species replacement (i.e., mediated by the heterogeneity of the micro‐environments represented by each site) and similarity (i.e., due to the overall harsh environmental conditions), while richness difference was the least represented process (Figure ). We inferred the mechanisms regulating bacterial community assembly with a phylogenetic bin‐based null model (iCAMP) and found that it was consistently dominated by dispersal limitation and homogeneous selection in both BSCs (47% and 25%, respectively) and DLs (53% and 23%, respectively). Such mechanisms defined significantly different BSC and DL bacterial communities across the six sites (PERMANOVA: F5,12 = 6.29, p = 0.001 and F5,12 = 4.57, p = 0.001 in BSCs and DLs, respectively; Figures ; CAP cross‐validation 100%; pairwise comparisons in Table ). Biodiversity analysis performed separately on the six sites showed that the number of shared bacterial OTUs in BSCs and DLs was 62% and 74%, respectively. We consistently detected site‐specific OTUs (range of site‐specific OTUs: 0–122 and 0–107 in BSCs and DLs, respectively), along with site‐enriched and site‐co‐shared taxa (Figures , and ). LEfSe analysis detected a total of 26 bacterial discriminants that varied among sites in the BSC‐associated bacterial communities (Figure ). Members of Cyanobacteria were the bacterial discriminants in Site 1, those of Verrucomicrobia, Chloroflexi, Firmicutes, Acidobacteria and Alphaproteobacteria in Site 2, those of Chloroflexi, Armatimonadetes and Cyanobacteria in Site 3, those of Planctomycetes, Bacteroidetes, Chloroflexi and Cyanobacteria in Site 4, and those of Actinobacteria, Acidobacteria, Alphaproteobacteria and Firmicutes in Site 6 (details in Figure ). Notably, no bacterial discriminants were detected in the BSC of Site 5, nor in any of the six DL sites. SIMPER analysis was also performed to unveil the OTUs that contributed predominantly to the site diversity. In the BSC‐associated bacterial microbiomes, 43 OTUs (out of 3769) played a crucial role, contributing up to 10% of the dissimilarity.
Among these, members of 22 families were identified, including Acetobacteraceae, Bacteroidaceae, Burkholderiaceae, Ruminococcaceae and Sphingomonadaceae as the dominant ones. On the other hand, the differences between sites in the DL bacterial communities were mainly explained by the differential distribution of 54 OTUs (out of 3886) belonging to 29 families (e.g., Bacteroidaceae, Bifidobacteriaceae, Burkholderiaceae, Chitinophagaceae, Ruminococcaceae and Lachnospiraceae) and contributing to 10% of the observed dissimilarity. FAPROTAX analysis provided insights into the predicted bacterial ecological functions (Figure ). It showed a consistently widespread distribution among sites (both BSC and DL) of chemoheterotrophic metabolisms (i.e., aerobic chemoheterotrophy and chemoheterotrophy) and a lower, yet still widespread, distribution of fermentative metabolisms. Photosynthetic metabolisms were found in several but not all BSC samples and in one of the DL samples. The overall picture indicates a general site specificity of the deduced metabolisms in the BSC and DL layers of the examined sediments.
Factors shaping the diversity of bacterial communities associated with BSC and DL among sites
The role of the micro‐niches in shaping the associated bacterial communities was further shown by the linear decrease of bacterial community similarity with increasing differences in the topographical (i.e., distance among sites; BSC: p < 0.0001, R² = 0.2, slope = −0.096; DL: p < 0.0001, R² = 0.15, slope = −0.07; Figures ), physico‐chemical (BSC: p < 0.0001, R² = 0.32, slope = −14.39; DL: p < 0.0001, R² = 0.23, slope = −10.89; Figures ), micro‐climatic (BSC: p = 0.0001, R² = 0.09, slope = −0.25; DL: p = 0.0024, R² = 0.06, slope = −0.17; Figures ) and IPL signature (BSC: p = 0.0006, R² = 0.08, slope = −1.46; DL: p = 0.023, R² = 0.03, slope = −2.09; Figures ) characteristics of the different sites. These results indicate that the unique conditions defining the different sites are important drivers structuring the bacterial communities. To further understand the forces that shape the variance of the BSC and DL bacterial communities among sites, we analysed the contribution of the physico‐chemical variables, as these were the most important in explaining the observed diversity according to the slopes of the regressions (Figure ). In BSCs, pH, DOC and crust RH (RH%) were the principal physico‐chemical variables (AICc = 105.79, R² = 0.63, Table ), explaining up to 40% of the variance (Figure ).
Correlation analysis showed that (i) DOC had a significantly positive correlation with Proteobacteria phylum, Reyranellales and Anaerolineae RBG‐13‐54‐9 orders, Elsteraceae , Reyranellaceae and Myxococcales P3OB‐42 families, and Brevundimonas genus; (ii) pH was positively correlated with Cyanobacteria and Deinococcus‐Thermus phyla, Deinococci and Oxyphotobacteria classes, Blastocatellales and Deinococcales orders, Blastocatellaceae , Intrasporangiaceae , Deinococcaceae and Hymenobacteraceae families, and Nostoc PCC‐73102, Deinococcus , Hymenobacter and Gemmatirosa genera, while it was negatively correlated with Microgenomatia class, WD260 order, CPla‐3 termite group and KD3‐93 families, and Acidipila , Phenylobacterium , Rhizobacter and Telmatocola genera; (iii) RH was positively correlated with members of Omnitrophicaeota phylum, Dojkabacteria WS6 class, Steroidobacterales order, Steroidobacteraceae , Terrimicrobiaceae and Sandaracinaceae families, and Tahibacter , Terrimicrobium and Pelosinus genera. In the case of DLs, the variability of bacterial communities was mainly explained by Fe, P and K, (AICc = 127.12, R 2 = 0.46, Table ) which account for almost 30% of the entire variance (Figure ): (i) Fe was negatively correlated with the presence of members of Micavibrionales order, Xanthobacteraceae and Simkaniaceae families, and Pseudolabrys genus; (ii) P was positively correlated with members of Gracilibacteria class, Micavibrionales order, Elsteraceae and Micavibrionaceae families, and Polycyclovorans genus; (iii) K was positively correlated with the members of Xanthomonadales order.
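As an illustration of the kind of taxon–environment screen summarized above (taxa whose relative abundances track DOC, pH or RH), the R sketch below correlates each taxon with a single variable using Spearman's method and adjusts the p‐values for multiple testing; the actual analysis was performed with Metacoder, so this is only a simplified stand‐in with placeholder objects.

```r
# Simplified stand-in for the taxon-environment correlation screen
# (Spearman correlation of each OTU's relative abundance with DOC, BH-adjusted).
set.seed(5)
taxa_rel <- matrix(runif(15 * 100), nrow = 15,
                   dimnames = list(NULL, paste0("OTU", 1:100)))   # samples x taxa
doc <- runif(15, 1, 50)                                           # DOC per sample (mock)

res <- t(apply(taxa_rel, 2, function(x) {
  ct <- cor.test(x, doc, method = "spearman", exact = FALSE)
  c(rho = unname(ct$estimate), p = ct$p.value)
}))
res <- as.data.frame(res)
res$padj <- p.adjust(res$p, method = "BH")
head(res[order(res$padj), ])
```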
In this study, we investigated six sites of primary colonization with BSCs in a 50 m² area of the Lobuche moraine. The study area was in the relic debris-mantled ablation zone, no longer connected to the glacier dynamics (Jones et al., ). We chose this area because, within a relatively small space, it presents the hummock and depression sites with variable spatial orientation, altitude and slope that characterize the overall Lobuche moraine, while minimizing the intrinsic variability associated with time in glacier-moraine chronosequences. Thus, the sites were representative of the different topographical conditions occurring all over the moraine but did not reflect the intrinsic variability due to the time from deglaciation. Due to their topographic position in the studied area, the six sites were exposed to distinct environmental conditions in terms of irradiation, air temperature and humidity. The spatial orientation and elevation of the sites determine the level of exposure to environmental agents (wind, sunlight irradiation, dust deposition, humidity, etc.) and so influence the ecological dynamics of the living community (Austin et al., ; Fischer & Subbotina, ; Kidron et al., ), at least in the summer season when the moraine is not under snow cover. However, we must consider that environmental measures taken over a very short time window, such as those used in this study, cannot be extended to the other seasons, and must therefore be interpreted within the limited time window in which they were obtained. The six sites also presented a set of specific physico-chemical properties (organic matter, pH and soil nutrient concentrations) that make them distinct niches in which unique bacterial communities are selected both at the BSC and DL levels. In many soil ecosystems, microbial community composition has been shown to be shaped by soil physico-chemical conditions, such as pH in Antarctica (Chu et al., ), nutrients in North American ecosystems (Ramirez et al., ) and moisture in the Chinese steppe (Zhao et al., ). Nevertheless, the presence of multiple niches in a relatively small area in the glacier forefield suggests that events ongoing in the early stages of primary succession are patchy and heterogeneous, rather than coordinated and/or strongly affected by the macro-climatic conditions of the overall moraine, and these differences are mirrored in the inhabiting microbiomes. The BSCs and DLs of the Lobuche moraine were found to host different bacterial communities, which were composed of phototrophic taxa but also included a large component of non-phototrophic microorganisms. The functionality of BSCs as pioneer colonizers in the extreme cold ecosystems of the Himalayas and elsewhere has mainly been investigated for phototrophs (Janatková et al., ; Schmidt et al., ), but mounting evidence draws attention to the non-photosynthetic microorganisms as contributors to ecosystem stability and multifunctionality (Ezzat et al., ; Jousset et al., ; Lynch & Neufeld, ; Wang et al., ). For instance, it has recently been shown that cryospheric ecosystems are inhabited by generalist taxa, with Proteobacteria and Bacteroidota as key taxonomic groups (Bourquin et al., ; Gupta et al., ; Mapelli et al., , ; Schmidt et al., ).
The occurrence of the same bacterial phyla in geographically distinct areas is indicative of their ability to adopt similar metabolic strategies to survive in oligotrophic and cold environments (Srinivas et al., ), facilitating a multitude of phototrophic, photoheterotrophic and chemolithotrophic processes (Borin et al., ; Mapelli et al., ). The BSCs and DLs of the Lobuche moraine were found to share a large number of generalist bacterial OTUs. Despite the dominance of such generalists, the consistent selective pressure of BSCs and DL affects the assembly of bacterial communities with site‐specific patterns even at short distances. For instance, among the taxa that discriminate between BSCs and DLs microbiomes, Alphaproteobacteria and Bacteroidetes were enriched in the BSC. Alphaproteobacteria prefer nutrient‐rich environments, and in deglaciating soil their abundance increase with soil age (Fernández‐Martínez et al., ), while Bacteriodetes are inhabitants of BSCs occurring in cold desert ecosystems like the Eastern Pamir (Khomutovska et al., ). Furthermore, BSCs were enriched in bacteria affiliated with Burkholderiales , typical inhabitants of ice‐dominated environments, that are seeded in the mineral substrate released by ice melting and colonize soils at initial and medium stages of development (Mapelli et al., ; Schmidt et al., ). On the contrary, DLs were enriched in Actinobacteria, Proteobacteria and Acidobacteria phyla which are among the most abundant taxa inhabiting bare soils of cold deserts worldwide (Bourquin et al., ; Leung et al., ) and were found in other Himalayan forefields (Srinivas et al., ). The limited dispersion and homogenous selection resulted in being the main ecological processes in the assembly of microbial communities in BSC and DL, suggesting that bacteria had a life–death dynamics that allowed community composition to drift apart due to spatial isolation between sites (Fodelianakis et al., ; ), as it was also observed in permafrost‐associated microbiomes in Alaskan forests (Bottos et al., ). The rejection of the neutral community model for both BSC and DL and of the source‐sink hypotheses support that bacterial community assembly is mainly deterministic rather than stochastic and it is driven by environmental factors and microbial species interactions. For instance, the distinct BSC habitat hosts bacterial communities predominantly trapped in EPS matrices to form complex structures/films covering the underlying substrate in the moraine, with consequent limitation of bacterial dispersal from DL to BSC and vice versa. An alternate interpretation of the data is that the environmental/ecological conditions of BSC and DL are not similar enough to experience comparable levels of selection (e.g., C content), as expected for those species identified as ‘neutrally distributed’. Glacier ecosystems are low‐energy input environments (Schostag et al., ) where microorganisms have to survive under sub‐zero temperatures and are subjected to freeze–thaw cycles, standing as quasi‐closed systems (Graham et al., ). Under these conditions, dispersal is reduced, and nutrients supply is governed by local dynamics involving cellular dormancy (Choudoir & DeAngelis, ) and recycling of energy by exploiting the biomass of necrotic cells (Shoemaker et al., ). During dormancy, cells are in a viable but not‐cultivable state, a low‐energy condition accompanied by a reduction in cellular size. 
Upon perception of favourable conditions, cells exit dormancy and restore the vegetative growth (Su et al., ). At the same time, thawing cycles may lead to cellular lysis, so the dead microorganisms may release organic compounds to fuel living cells. By tracking population dynamics under energy‐limited conditions, it was observed that cells could use the necrotic biomass to sustain growth and evolve through the natural selection (Shoemaker et al., ). This bacterial survival strategy under energy limitation seems to be conserved across the bacterial kingdom, suggesting that this trend may be common to hot and cold deserts, where niche‐filtering by the climate and edaphic factors drive community assembly (Lee et al., ; Pointing et al., ). IPL analysis deepens the knowledge of the resident microbiomes by providing information that helps in clarifying the adaptation and diversity of the entire community (Sturt et al., ). The presence in the study area of oxidized phosphatidyl choline, betaine and ornithine lipids having one to multiple hydroxylations and oxylipins indicates that microorganisms have to cope with high oxidative and photo‐oxidative stress, associated with low temperature and high irradiation (Amiraux, ). Furthermore, the predominant fatty acids had monounsaturated chains, a molecular component of the biochemical strategy used by psychrophiles to maintain cell membrane integrity in low‐temperature environments (Collins & Margesin, ). A higher abundance of glycolipids and heterocyst glycolipids in the BSCs compared to DLs is owed to the switch of glycolipid‐containing phototrophs on the surface to non‐phototroph organisms in the DLs (Kalisch et al., ). While little is known about the turnover of diacylglycerol glycolipids in soils after cell death, studies have shown that heterocyst glycolipids can be preserved in laminated sediments for millions of years (Bauersachs et al., ). Therefore, detecting heterocyst glycolipids in the deeper layers could be the trace of non‐degraded detrital material from the surface that has been buried over time. IPL analysis supports the bacterial metabolisms inferred by the 16S rRNA gene sequences dataset, which shows that phototrophic‐related functions are enriched in the BSCs compared to DLs. Besides phototrophs, representatives of the Firmicutes, Alphaproteobacteria and Gammaproteobacteria , are also capable of synthesizing glycolipids (Hölzl & Dörmann, ). While the abundant presence of trimethyl ornithine lipids can be most likely assigned to Planctomycetes in both the BSCs and DLs (Moore, ), the overall high levels of glycolipids and aminolipids suggest some nutrient limitation, especially of phosphorus (Schubotz et al., ; Van Mooy & Fredricks, ). The results of IPL and physico‐chemical analyses corroborate that in the studied area the bacterial structure and nutritional properties of the DLs are poorly affected by the overlying BSCs. According to IPL analysis, most of the microbial biomass accumulates in the BSCs, implying a faster turnover of nutrients in the crust with intermediate metabolites bound to the microbial loop and not released in the underlying mineral layer. A similar trend of biomass and nutrients accumulation within the crust surface rather than in the bare soil was observed along an elevation gradient from 5300 to 5900 m asl in the Tibetan plateau (Chu et al., ; Janatková et al., ). 
NMR analysis of BSCs revealed the load of O‐alkyl, alkyl and N‐alkyl carbon, indicating that the input of carbon derived from the biocrust comprises polysaccharides and aliphatic biopolymers, in agreement with the BSC‐produced extracellular polymeric substances (Mugnai et al., ; Rossi et al., ). Carbon transfer from the biocrusts to the substrate below was mainly observed in temperate conditions for well‐developed BSC like those moss‐dominated (Dümig et al., ). In the case of the Lobuche BSCs studied here, we speculate that the higher summer temperatures they experience, as opposed to DLs that are in contact with the permafrost, might enhance the nutrient turnover and consumption, thus limiting the transfer to the lower layers. Indeed, temperature strongly influences soil organic carbon formation in glacier foreland, showing a faster increase in its accumulation in forefields that experience warmer periods (Khedim et al., ). Because of the low temperatures and limited N availability in Nepalese soil, organic carbon decomposes at relatively low rates and a gradient in temperature between the BSC surface and the below‐layers could further affect such decomposition rate (Chu et al., ). Due to the limited metabolic exchanges with the overlying BSCs, DL communities may experience extreme starvation conditions for nutrients and carbon availability. The DL bacterial communities were less abundant but with a higher richness than the overlying BSC, suggestive of a reduced enrichment process (Krauze et al., ). Such microbiomes tend to establish more connected networks that may contribute to increase niche adaptation and survival success under extreme oligotrophic conditions (Dong et al., ). Higher network topological features in DL than in BSC evoke synergistic strategies through associations that optimize the exchange of limited nutrients for growth and survival (Perera et al., ). Another important parameter that significantly affected the microbial diversity of BSCs was their RH. Water content and its availability are crucial factors controlling the distribution and growth of soil microbes (Stres et al., ; Van Horn et al., ). The influence of water availability on BSC microbiomes has been documented for cyanobacteria that promptly move toward the surface of wetted soils and resume photosynthesis (Garcia‐Pichel & Pringault, ). A suitable moisture content improves microbial activity, reinforces ecological functions, promotes the formation of soil fertility islands and accelerates the process of pedogenesis (Borin et al., ; Li et al., ). The abovementioned differences in carbon, DOC and moisture in the vertical soil, concomitantly with the temperature gradients occurring in the Lobuche forefield, may strongly affect the diversity of the soil microbiomes inhabiting the crusts and the DLs.
The study of primary colonization in the forefield of the Lobuche glacier in the Himalayas plateau indicates that the biogeochemical heterogeneity of BSCs and DLs recorded at the small spatial scale in the studied area contributes to the natural diversification of the moraine that, over time, can feed the formation of the soil. Our results point out that the irregular topography of the moraine governs such heterogeneity and steers the bipartite interaction between (i) the environmental conditions (such as physico‐chemical and micro‐climatic) of the sites, and (ii) the bacterial communities inhabiting the crust and the deeper substrate layers. We propose that, besides the study of time‐dependent chronosequences, an assessment of the microscale heterogeneity at each chronosequence point should be considered and acknowledged for a comprehensive learning of pedogenesis in moraines released by glacier ice.
Conceptualization: ER, RM, MF, SB and DD. Data curation: RM and MF. Formal analysis: RM, MF, ER, BS, FS. Funding acquisition: DD, SB. Investigation: ER, RM, MF, BS, FS, FM, SC, LB, LT, FT. Project administration: SB, FA, DD. Resources: SB, DD, FA, FS, LT. Software: RM, MF. Supervision: DD. Validation: ER, RM, MF, BS, FS, FM, SC, LB, LT, FT. Visualization: ER, RM, MF. Writing - original draft: ER, RM, MF, BS, FS, FM, LT, FA, SB, DD. Writing - review and editing: ER, RM, MF, DD.
The authors have no conflict of interest to declare.
Appendix S1: Supporting information.
Stochastic and deterministic processes shape bioenergy crop microbiomes along a vertical soil niche
Plants are rich microbial ecosystems and important ecological engineers (Bulgarelli et al., ; Delgado‐Baquerizo et al., ; Tedersoo et al., ). These sessile organisms are anchored to the soil by their roots, which also assist in provisioning water, nutrients and minerals to plants. Root and aboveground plant tissues are populated by a rich diversity of microorganisms known as the plant microbiome. Plant microbiomes are capable of modulating plant health, growth, and development, and have been implicated in crop productivity and ecosystem functioning (Agler et al., ; Durán et al., ; Howe et al., ; Mendes et al., ; van der Heijden et al., ). Soils are the largest and most diverse reservoir of microorganisms on the planet (Bickel & Or, ; Fierer, ). Soil food webs are fuelled by autotrophic metabolism, thus, aboveground plant photosynthesis is critical to soil development. Similarly, the activities of soil microbes that feed on plant residues and exudates help to stabilize soil carbon, while simultaneously recycling nutrients necessary for plant productivity. Many factors are known to influence community assembly of microbial communities around the host. These include environment, plant species, genotype or health conditions (Fitzpatrick et al., ; Xiong et al., ; Wagner et al., ), microbial interactions, mutualism, or competition (Agler et al., ; Hassani et al., ), as well as ‘neutral’ processes, such as dispersal limitation, speciation and ecological drift (Rosindell et al., ). All these factors are likely to play a role in the establishment of microbiomes and quantitative models including neutral theory, which are becoming more popular for assessing the role of adaptation to different environments and natural selection (Burns et al., ; Venkataraman et al., ). Soil chemistry and biology are known to change with depth, yet most studies on belowground plant microbiomes are focused on the top 10 cm of soils since this is where the density of fine roots is often highest (Zhang et al., ). Nonetheless, roots of perennial plants may extend meters down into the soil profile where they are important to soil carbon and mineral turnover (York et al., ). Therefore, knowledge concerning bioenergy crops and their microbial communities, interactions and functions in deeper soils is needed (de Vries et al., ). Bioenergy crops are being researched as a sustainable alternative to fossil fuels for supplying society's energy needs. To be sustainable, bioenergy cropping systems must maintain neutral or negative CO 2 emissions (Field et al., ), increase ecosystem macro‐ (Fletcher et al., ) and micro‐diversity (da C. Jesus et al., ), require low or no inputs in terms of fertilizers (Tilman et al., ), limit soil erosion and disturbance and be productive on lands that are unsuitable for agricultural food productions (Gelfand et al., ; Howe et al., ). Research is aimed at understanding how soils and their biodiversity help plants to maintain productive and sustainable biofuel crops with low inputs on lands that are otherwise not well suited for agricultural production. Here, we present results on fungal and bacterial microbiomes in soils and roots across a 1 m soil‐depth gradient across three bioenergy cropping systems. This research leverages the Great Lakes Bioenergy Research Center's Biofuel Cropping System Experiment (BCSE) at Michigan State's Kellogg Biological Station. 
Specifically, we aimed to (i) investigate the effect of depth on soil and root fungal and prokaryotic microbiome diversity and structure of poplar, restored prairie and switchgrass, (ii) identify a core set of taxa for each crop and depth and (iii) identify the relationships between microbial taxa, and microbial taxa and the plant host, across the vertical soil niche. We hypothesized that soil microbial diversity would be greatest in surface soils where aboveground organic inputs are concentrated, and would decrease with depth. Given that roots are an important source of carbon belowground, we also hypothesized that microbial community similarity would increase with depth across all three biofuel crops, and overall would be over‐represented by root‐associated taxa—particularly in deep soils. Our results expand knowledge on the fundamental rules that govern microbial communities in bioenergy cropping systems and the significant impact of host plants on soil microbiomes in deep soils.
Sampling and metadata collection
In spring 2018, soil cores to 1 m depth (7.6 cm diameter) were taken with a hydraulic probe (Geoprobe 540MT, Geoprobe Systems, USA) at the Kellogg Biological Station (KBS) poplar, switchgrass and prairie research sites. A total of 3 replicate cores were taken at 5 different plots (i.e., blocks) for each cropping system. Cores were cut into specific depth intervals (0–10, 10–25, 25–50 and 50–100 cm), and for each interval one root and one soil sample were collected at random throughout the entire core section. Fine roots were carefully separated from soil using a sieve and fine-tipped forceps, changing gloves between samples and cleaning off attached soil particles. Roots were then washed with a 0.5% Tween 20 solution, rinsed three times with sterile water, and finally wrapped in sterile paper towels and air dried at room temperature. Prior to DNA extraction, roots were powdered in 2 ml tubes using stainless steel beads on a TissueLyser II (Qiagen, USA). Overall, 60 root and soil samples were collected for each cropping system for a total of 360 samples. Cores were also analysed for total carbon (C%), total nitrogen (N%), sand (%), silt (%), clay (%), pH, PO4 3− (ppm), K+ (ppm), Ca2+ (ppm), Mg2+ (ppm), and cation exchange capacity (CEC, meq/100 g soil) at each depth and composited by plot (details available at https://data.sustainability.glbrc.org/protocols/158 ).
DNA extraction and amplicon library preparation
Genomic DNA was extracted from approximately 0.40 g of dried soils using the PowerMag® Soil DNA Isolation Kit (Qiagen, USA) following the manufacturer's instructions, and from approximately 1 g of fine (ø ≤ 0.5 mm) roots using a CTAB chloroform extraction protocol (Gardes & Bruns, ). DNAs were amplified using DreamTaq Green DNA Polymerase (Thermo Scientific, USA) with the primer sets ITS1f–ITS4 (Gardes & Bruns, ; White et al., ) and 515F-806R for Bacteria and Archaea (Caporaso, Lauber, et al., ), following a protocol based upon the use of frameshift primers as reported in (Benucci et al., ) and originally modified from (Lundberg et al., ). PCR products were checked by gel electrophoresis after staining with ethidium bromide and visualized with UV light. Samples were normalized with the SequalPrep Normalization Plate Kit (ThermoFisher Scientific, USA) and pooled together. The generated amplicon library was concentrated 20:1 with Amicon Ultra 0.5 ml 50 K filters (EMDmillipore, Germany) and purified from primer dimers with Agencourt AMPure XP magnetic beads (Beckman Coulter, USA). We sequenced the amplicon library on an Illumina MiSeq instrument with the v3 600 cycles kit (Illumina, USA).
Bioinformatic data analysis
Raw internal transcribed spacer (ITS) and 16S reads were evaluated for quality with FastQC (Andrews, ). 16S reads were merged with PEAR (Zhang et al., ). Forward ITS reads were used for all downstream analyses. Reads were demultiplexed by barcode sequences in QIIME (Caporaso, Kuczynski, et al., ), and Illumina adapters and sequencing primers were removed. Reads were then quality filtered and trimmed to equal length with Cutadapt (Edgar, ; Edgar & Flyvbjerg, ; Martin, ). After sequence read de-replication, singletons were removed and sequences clustered into operational taxonomic units (OTUs) based on 97% similarity using the UPARSE (Edgar, ) algorithm.
Taxonomy assignments were performed in CONSTAX2 (Gdanetz et al., ; Liber et al., ) against the UNITE eukaryote database, version 8.2 of 4 February 2020 (Abarenkov et al., ) and SILVA, version 138 (Quast et al., ), respectively. The --high_level_db flag in CONSTAX2 was used to identify non-target taxa as well as OTUs unidentified at the Kingdom level (Bowsher et al., ). Non-target taxa, OTUs not assigned to a Kingdom, and OTUs identified as either chloroplast or mitochondria in either dataset were removed from subsequent analysis.
Statistical analyses
We first imported summary files from the ITS and 16S datasets into the R statistical environment (R Core Team, ) and merged them into phyloseq objects (McMurdie & Holmes, ). We then removed OTUs with fewer than 10 total sequences (Lindahl et al., ; Oliver et al., ) to protect against spurious errors, for example, tag switching and artefacts (Carlsen et al., ). Before starting the analysis, we explored the library read distribution across samples and according to different variables (Figure ). We then removed PCR and sequencing contaminants with decontam (Davis et al., ) using sequence data generated in MiSeq library negative control samples (Figure ). Rarefaction curves for the ITS and 16S datasets were generated to visualize variation in sample sequencing depth (Figure ). The sequence depth was lower for deeper soils than surface soils. To address this, we removed approximately 3% of the samples having fewer library sequences, and we normalized the remaining samples adopting the cumulative sum scaling technique implemented in the metagenomeSeq R package (Paulson et al., ). OTU richness (Simpson, ) and Shannon's diversity index (Hill, ) were calculated with the functions ‘specnumber’ and ‘diversity’ in vegan (Website). Shannon's index was then rescaled to a 0–1 scale to facilitate comparison across groups using the evenness formula E_H = H / ln(k), where H = −Σ p_i ln(p_i) is summed over the k species (i.e., OTUs) and p_i denotes the proportional abundance of species i. To test whether depth (i.e., 0–10, 10–25, 25–50 and 50–100 cm) and niche (i.e., root, soil) affected richness and the Shannon index we used factorial analysis of variance (ANOVA) (~niche * depth) or Kruskal–Wallis tests when datasets did not meet normality and/or homoscedasticity prerequisites. Beta-diversity multivariate analyses were inspired by Anderson and Willis (Anderson & Willis, ). In particular, we used: (i) a principal coordinate analysis (PCoA) unconstrained ordination (Kruskal, ) followed by a permutational multivariate analysis of variance (i.e., PERMANOVA) to explore similarities between root and soil samples; (ii) a canonical analysis of principal coordinates (CAP) (Anderson & Willis, ), a constrained ordination used to display differences in community structure explained by the factors in our model, validated with permutation tests to assess the significance of the constraints (‘cmdscale’ in the vegan R package) (Website). We also calculated adjusted R2 as an unbiased measure of the explained variance. We fit environmental vectors onto the CAP ordination with the function ‘envfit’ in vegan; (iii) an analysis of multivariate dispersion (Anderson et al., ) to test for variance homogeneity among samples and across sample groups.
(iv) A taxon‐group association analysis to assess the degree of preference and significance of each OTU for a target group in relation to other groups using function ‘multipatt’ in the indicspecies R package (De Cáceres et al., ) with the IndVal.g methods that incorporates a correction for unequal group sizes. This analysis calculates two species traits: exclusivity (exclusively present in a habitat) and fidelity (present in all samples of that habitat) and an indicator value is calculated based on these traits to assess the extent to which an OTU is an indicator of a treatment or a sample group. We extracted core OTUs (i.e., frequent, more persistent taxa) across depth for each crop and across crops for each depth following the methodology proposed by Shade and Stopnisek ( ). This approach aids in the identification of core OTUs that differ between crops or depth (all taxa that are core in a group were kept even if not present in other groups). Briefly, abundance‐occupancy distributions were built for each crop and depth and core taxa identified as the set of OTUs that maximize the beta‐diversity resolution (Bray–Curtis similarity) compared to the whole dataset. To inform about stochastically or deterministically recruited community members we then fit neutral models into our OTU distributions to inform about community assembly recruitment processes (Shade & Stopnisek, ; Sloan et al., ). According to the Neutral Theory, species are ‘neutral’ in the niches they live in. Individual organisms are identical in birth, death, dispersal and speciation rates, and they are only lost or acquired randomly from the source meta‐community. Fitting microbial community composition into a neutral statistical model, which assumes community assembly is driven only by stochastic dispersal and drift, will allow us to delineate the importance of selection and neutral processes and provide a broad insight into mechanisms generating and maintaining community composition (Burns et al., ; Venkataraman et al., ). Two main coefficients were evaluated in the models: (i) the coefficient of determination ( r 2 ) and represent a measure of the goodness of fit. It ranges from 0 (no fit) to 1 (perfect fit) and is key to assess how important neutral processes are in community structure. (ii) The estimated migration rate ( m ) or the probability that a random loss of an individual in a community is replaced by dispersal from the meta‐community, as opposed to reproduction within the local community, and therefore can be considered a measure of dispersal limitation. The lower the value of m , the greater the dispersal limitation impacts community assembly. To explore co‐occurrence patterns of fungal and prokaryotic OTUs for each crop and depth, we built microbial networks of previously selected core OTUs with the ‘spiec.easi’ function in the SpiecEasi R package (Kurtz et al., ). To obtain a more accurate network modelling and for known statistical and computational reasons (i.e., rare taxa occurrences can create spurious correlations) (Barberán et al., ; Farrer et al., ) we built our network on just the core community members obtained as described above. We identified network hubs (OTUs that are central, densely connected with other OTUs in the network) and module hubs (OTUs more densely connected with module's OTUs rather than other OTUs in the network) based on the ratio between within‐module ( Zi ) and between‐module connectivity ( Pi ) and as previously shown (Andrews, ; Olesen et al., ). 
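As an illustration of the co-occurrence step, the sketch below shows how a cross-kingdom network of the core OTUs could be built with SpiecEasi and partitioned into modules for the Zi/Pi hub scoring. This is a hypothetical example rather than the study's pipeline: core_fungi and core_bact are assumed count matrices (samples in rows, identical sample order), and the tuning values are illustrative only.

library(SpiecEasi)
library(igraph)

# Joint (fungal + prokaryotic) network of core OTUs with the Meinshausen-Buhlmann method
se <- spiec.easi(list(core_fungi, core_bact), method = "mb",
                 lambda.min.ratio = 1e-2, nlambda = 20)

# Convert the selected adjacency matrix to an igraph object
g <- adj2igraph(getRefit(se))

# Modules and connectivity underlying within-/between-module (Zi/Pi) hub classification
mods <- cluster_fast_greedy(g)
modularity(mods)   # network modularity
degree(g)          # connectivity (degree) per OTU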
We used heatmaps to visualize the connection between proportions of positive and negative intra‐ and inter‐kingdom links (i.e., connections between OTUs), and relative abundances in root‐to‐root connected OTUs, for each crop and depth level. All analyses and figures were generated in R (R Core Team, ) while minimal graphical adjustments to improve figures' visibility were performed in Inkscape (Inkscape Project, ).
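A compact, hypothetical illustration of the alpha-diversity and constrained-ordination steps described above is given below; otu (a samples x OTUs table) and meta (with columns such as niche, depth, crop and the measured soil variables) are placeholder objects standing in for the study's processed data, and the variable names passed to envfit are assumptions.

library(vegan)

# Alpha diversity per sample
richness <- specnumber(otu)                      # observed OTU richness
H        <- diversity(otu, index = "shannon")    # Shannon index
evenness <- H / log(richness)                    # rescaled (0-1) Shannon evenness

# Factorial ANOVA on diversity
summary(aov(H ~ niche * depth, data = meta))

# Constrained ordination (CAP) on Bray-Curtis distances, with permutation tests
cap <- capscale(otu ~ crop * depth, data = meta, distance = "bray")
anova(cap, by = "terms", permutations = 999)     # significance of constraints
RsquareAdj(cap)                                  # adjusted R2 of the model

# Environmental vectors and homogeneity of multivariate dispersion
envfit(cap, meta[, c("C", "N", "pH")], permutations = 999)
permutest(betadisper(vegdist(otu, "bray"), meta$depth), permutations = 999)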
Sequencing results
After demultiplexing, we obtained a total of 14,923,238 forward and 8,204,925 reverse sequence reads for ITS, and 21,640,158 forward and 19,917,130 reverse sequence reads for 16S with Phred quality >19, respectively. On average, we generated 38,264 ± 20,136 forward and 21,038 ± 16,500 reverse sequence reads per sample for ITS and 55,917 ± 33,371 forward and 51,465 ± 30,657 reverse 16S sequence reads, respectively. After removing non-fungal OTUs and contaminants, including filtering out OTUs in positive and negative control samples, we were left with 5,123,276 ITS (2794 OTUs) and 17,373,582 16S (13,855 OTUs) clean sequence reads.
Microbial alpha diversity
In the ITS dataset, Ascomycota was the most abundant phylum (72.9%), followed by Basidiomycota (10.0%) and the subphyla Mortierellomycotina (1.7%) and Glomeromycotina (1.7%), while in the 16S dataset, the most abundant class was Actinobacteria (28.9%), followed by Alphaproteobacteria (12.3%), Betaproteobacteria (5.7%) and Acidobacteria_Gp16 (5.4%). Archaea in the Thaumarchaeota (1.1%) and Crenarchaeota (<0.1%) phyla were also present but low in abundance. We found that soil fungal and prokaryotic OTU richness strongly decreased with increasing soil depth in all crops, while root communities were less affected (Figure ). The Shannon index increased or stayed the same with depth for all crops. Factorial ANOVA (Table ) showed that niche, depth and their interaction were the main factors driving alpha diversity metrics across crops, and demonstrated that depth impacts microbial richness more strongly than Shannon diversity. In general, communities became less diverse (especially in the soil) and slightly more even (especially in the roots) with increasing depth for both fungi and prokaryotes.
Microbial beta-diversity
Fungal and prokaryotic communities clustered mainly by niche (i.e., soil vs. root), depth and ultimately crop, as displayed in the PCoA ordination graph (Figure ). The same trends were detected by PERMANOVA (i.e., ‘adonis’, permutations [perm.] 9999), which showed significant differences (p ≤ 0.0001) in community structure between root and soil samples (i.e., the niche factor) accounting for about 11% and 26% of the variation for fungi (Figure ) and prokaryotes, respectively (Figure ). Depth was the second significant factor in terms of explaining variation affecting microbial communities (7% fungi and 10% prokaryotes), followed by crop (about 4% fungi and 2% prokaryotes). In addition, we detected significant dispersion (p ≤ 0.0001) around centroids (i.e., ‘betadisper’ and ‘permutest’, perm. 9999) in niche, crop, and depth samples for fungal (Figure ) and prokaryotic (Figure ) communities. Fungal and prokaryotic root samples showed significantly higher average dispersion than soil samples (i.e., higher heterogeneity between samples), but soils showed a wider distribution, implying there is greater variation between centroids in soil samples compared to roots. Interestingly, a significant dispersion effect was present between samples from different crops and depths, with deeper soils having a higher dispersion and narrower distribution. For an in-depth understanding of the effects that crop species and soil depth had on the microbial communities, we analysed root and soil separately with canonical analysis of principal coordinates (CAP) (Figure ) fitted to environmental vectors.
In this case, samples clustered mainly by depth (i.e., CAP1) in both fungal (Figure ) and prokaryotic (Figure ) communities, but tighter clusters were visible in the soil compared to the root communities. A separation by crop (i.e., CAP2) was also detectable in the CAP ordination, and was more visible for fungi, where poplar samples lay further apart from the other crops, than for prokaryotes. Indeed, depth showed the greatest significant effect (p ≤ 0.0001) for both fungal and prokaryotic communities, followed by crop and the interaction between the two (Table ). In particular, the variance explained (i.e., adjusted R2) by depth was higher in the soil (about 24% for fungi and 50% for prokaryotes) compared to the roots (about 13% for fungi and 16% for prokaryotes). The interaction factor (i.e., crop: depth) explained a low amount of variance in all datasets, ranging from about 3% for soil prokaryotes to 5% for root fungi (Table ). We found non-significant dispersion around centroids (variances) between crops in all communities, as shown in the box plots of Figure , representing the distribution of distances to centroids for each sample. However, we found significant dispersion (p ≤ 0.0001) around centroids between samples of different depths for root fungi, soil fungi and soil prokaryotes, whereas dispersion was not significant for crops and depth for soil prokaryotes (Figure ). Sample dispersion decreased with increasing depth in the roots (i.e., root communities became more similar to each other with increased depth), but stayed constant or increased in soils (i.e., communities were more different from each other in deeper soils). Fitted environmental vectors showed that soil microbial communities towards the surface correlated with higher total carbon (C%), nitrogen (N%), phosphorus (PO4 3−), potassium (K+) and silt, whereas communities of deeper soils correlated with increased pH and sand content. Interestingly, PO4 3− fit significantly onto the fungal ordinations, while calcium (Ca2+) and magnesium (Mg2+) only fit onto the prokaryotic ordinations, with higher levels towards the soil surface. Chemistry data alone indicated that soil N% and C%, as well as the amount of K+ (p ≤ 0.05), decreased significantly with increasing depth. Soil micronutrients (i.e., Ca2+ and Mg2+) accumulated at median soil depths. Soil texture changed with depth, with % sand increasing considerably in deeper soils. An inverse pattern was seen for silt, and was statistically significant (p ≤ 0.05) in prairie and switchgrass but not in poplar. In addition, strong positive correlations were found between Ca2+ and Mg2+ contents and cation exchange capacity (CEC) values (Figure ). The adjusted R2 from CAP analysis performed on individual groups of samples (Figure ) clearly showed that the effect of depth on community structure was generally higher for soils than roots, particularly for prokaryotes. Depth affected poplar soil fungi the most and root fungi the least. On the other hand, the effect of crop was higher in root than in soil communities and, in the fungal communities, generally higher close to the surface than in deeper soils (Figure ). For example, the highest effect of crop was detected for fungal root communities at 10–25 cm.
Neutral models, core taxa and microbial networks
Neutral processes could help drive community assembly and maintenance.
To assess the importance of neutral and non‐neutral processes, for example, microbial interactions or dispersal, we fit our data into a neutral assembly model (Figure , Figure ). We found the proportion of neutral, above, and below model prediction OTUs were similar across depths and in the different crops (Figure ). However, when just the core fungal and prokaryotic OTUs (defined here as the minimum OTU set that preserve the same community structure) (Shade and Stopnisek, ) were selected separately, some interesting trends were found. In the fungal communities, neutral OTUs (i.e., OTUs driven by drift) were more abundant in deeper soils (Figure ). In surface soils, OTUs above (i.e., OTUs selected or maintained by the host) or below (i.e., these are OTUs selected against by the host, or dispersal limited) the model predictions were more abundant (Figure ), particularly in poplar and switchgrass. To detect if the proportions of core OTUs classified by the neutral models were grouped according to crop or depth, we performed a PCA and significant differences between groups tested with PERMANOVA. The proportion of neutral, above and below prediction fungal OTUs statistically significantly separate by depth, which explained about 53% of variation in data. In the prokaryotic communities, we can clearly see a higher number of OTUs below the model prediction in deeper soils and a lower number of neutral OTUs (Figure ), especially in poplar and switchgrass. In general, microbial patterns in prairie systems were less distinguished, perhaps due to the diverse nature of prairies in terms of plant species present and their associated microbiomes. Regarding the neutral models goodness of fit ( r 2 ), the models based on the prokaryotic communities showed on average a higher fit compared to the fungal ones (Figure ) implying a higher importance of neutral processes in structuring these communities. Neutral fit was also generally lower in deeper soil samples compared to the surface in both communities. In addition, the migration rate ( m ) was on average higher in soil samples closer to the surface and lower in deeper soils, for both fungi and prokaryotes. Low m values suggest higher influence of dispersal limitation in community assembly (Figure ). Again, we used PCA and PERMANOVA to detect significant differences in r 2 and m rate between crops or depths. We found that only in the prokaryotic communities, r 2 and m significantly separated by depth, explaining about 66% of variation in the data, indicated that neutral processes have greater consequences for community assembly in deeper samples compared to more shallow ones. Since the core taxa appear to follow specific trends or relationships across depths (i.e., 0, 25, 50 and 100 cm) and to particular plant hosts, we used these taxa to explore covariance networks (Figure ) to identify potential interactions between the members of the communities. Microbial networks showed quantitative and qualitative shifts in diversity across soil depth and crop species. The number of Ascomycota fungal OTUs decreased with depth while bacterial, Actinobacteriota and Proteobacteria increased with depth. This was most pronounced in poplar and prairie systems, but not in switchgrass, where samples at 25 and 50 cm depth were the most diverse (Figure ). Bacterial OTUs within the Actinobacteriota, Proteobacteria, Chloroflexi and fungi in Ascomycota and Chytridiomycota were defined as network hubs (Table ). 
Interestingly, only a single fungal hub was present in poplar, and a few in switchgrass—which was comprised exclusively by bacteria, as reported by the number within the bubbles in Figure . Positive and negative intra‐ and inter‐kingdom links showed that fungi–fungi links decreased with increasing depth in all crops, while bacteria–bacteria links increased but stayed more or less the same in switchgrass (Figure ). Fungi–bacteria links decreased with depth in poplar but not in prairie and switchgrass. Regarding network complexity, several network properties increased with depth until 50 cm, and then decreased (Table ). When we look at the abundance of positive and negative intra‐ and inter‐kingdom root‐to‐root links (i.e., the higher the abundance the more the links are between root OTUs), we discovered that root‐to‐root and fungi–fungi links decrease with increased depth in all crops (in deeper soil there are more soil‐to‐root links compared to the surface), except for switchgrass were differential patterns were not very clear (Figure ). Root‐to‐root bacteria–bacteria links increase with increased depth (in deeper soil there are more root‐to‐root links compared to the surface). Positive and negative root‐to‐root fungi–bacteria links decrease in Poplar, while seems to increase or not having a defined trend in prairie and switchgrass (Figure ). Interestingly, the highest positive fungi–fungi root‐to‐root abundance was detected for prairie at 0–10 cm, while the highest bacteria–bacteria abundance for poplar at 50–100 cm. At phylum level, there was an increase of root‐to‐root links between OTUs within Proteobacteria and Actinobacteriota, and a decrease within Ascomycota, for all crops (Figures , ). Five network properties were able to statistically discriminate ( p ≤ 0.05) between the networks across depth but not across crops (Figure ). Modularity and the number of module hubs were higher in deeper soils. Average module size and average degree were correlated one another and together with negative links higher in soils at 25–50 cm depth (Figure ).
After demultiplexing, we obtained a total of 14,923,238 forward and 8,204,925 reverse sequence reads for ITS, and 21,640,158 forward and 19,917,130 reverse sequence reads for 16S with Phred quality >19, respectively. On average, we generated 38,264 ± 20,136 forward and 21,038 ± 16,500 reverse sequence reads per sample for ITS and 55,917 ± 33,371 forward and 51,465 ± 30,657 reverse 16S sequence reads, respectively. After removing non‐fungal OTUs and contaminants, including filtering out OTUs in positive and negative control samples we were left with 5,123,276 ITS (2794 OTUs) and 17,373,582 16 S (13,855 OTUs) clean sequence reads.
In the ITS dataset, Ascomycota were the most abundant phylum (72.9%), followed by Basidiomycota (10.0%) and the subphyla Mortierellomycotina (1.7%) and Glomeromycotina (1.7%), while in the 16S dataset, the most abundant class was Actinobacteria (28.9%), followed by Alphaproteobacteria (12.3%), Betaproteobacteria (5.7%) and Acidobacteria_Gp16 (5.4%). Archaea in the Thaumarchaeota (1.1%) and Crenarchaeota (<0.1%) phyla were also present but low in abundance. We found that soil fungal and prokaryotic OTU richness strongly decreases with increasing soil depth in all crops while root communities were less impacted (Figure ). The Shannon index increased or stayed the same for all crops. Factorial ANOVA (Table ) showed that niche, depth and their interaction were the main factors driving alpha diversity metrics across crops, and demonstrated that depth impacts microbial richness more strongly than Shannon diversity. In general, communities become less diverse (especially in the soil) and slightly more even (especially in the roots) with increasing depth for both fungi and prokaryotes.
Fungal and prokarayotic communities clustered mainly by niche (i.e., soil vs. root), depth and ultimately crop, as displayed in the PCoA ordination graph (Figure ). The same trends were detected by PERMANOVA (i.e., ‘adonis’, permutations [perm.] 9999), which showed significant differences ( p ≤ 0.0001) in community structure between roots and soil samples (i.e., niche factor) accounting about 11% and 26% of the variation for fungi (Figure ) and prokaryotes, respectively (Figure ). Depth was the second significant factor in terms of explaining variation affecting microbial communities (7% fungi and 10% prokaryotes) followed by crop (about 4% fungi and 2% prokaryotes). In addition, we detected significant dispersion ( p ≤ 0.0001) around centroids (i.e., ‘betadisper’ and ‘permutest’, perm. 9999) in niche, crop, and depth samples for fungal (Figure ) and prokaryotic (Figure ) communities. Fungal and prokaryotic root samples showed significantly higher average dispersion than soil samples (i.e., higher heterogeneity between samples), but soils showed a wider distribution implying there is greater variation between centroids in soil samples compared to roots. Interestingly, a significant dispersion effect was present between samples at different crops and depths, with deeper soils having a higher dispersion and narrower distribution. For an in‐depth understanding of the effects that crop species and soil depth had on the microbial communities, we analysed root and soil separately with canonical analysis of principal coordinates (CAP) (Figure ) fitted to environmental vectors. In this case, samples clustered mainly by depth (i.e., CAP1) both in fungal (Figure ) and prokaryotic (Figure ) communities, but tighter clusters were visible in the soil compared to the root communities. A separation by crops (i.e., CAP2) is also detectable in the CAP ordination, and more visible for fungi where poplar samples lie further apart than the other crops, compared to the prokaryotes. Indeed, depth showed the greatest significant effect ( p ≤ 0.0001) for both fungal and prokaryotic communities, followed by crop and the interaction between the two (Table ). In particular, the variance explained (i.e., adjusted R 2 ) by depth was higher in the soil (about 24% for fungi and 50% for prokaryotes) compared to the roots (about 13% for fungi and 16% for prokaryotes). The interaction factor (i.e., crop: depth) explained a low amount of variance in all datasets, ranging from about 3% of the soil prokaryotes to 5% of root fungi (Table ). We found non‐significant dispersion around centroids (variances) between crops in all communities, as shown in the box plots of Figure , representing the distance to centroids distribution for each sample. However, we found significant dispersion ( p ≤ 0.0001) around centroids between samples of different depths for root fungi, soil fungi and soil prokaryotes, whereas dispersion was not significant for crops and depth for soil prokaryotes (Figure ). Sample dispersion decreased with increasing depth in the roots (i.e., root communities became more similar to each other with increased depth), but stayed constant or increased in soils (i.e., communities were more different from each other in deeper soils). Fitted environmental vectors showed that soil microbial communities towards the surface correlated with higher total carbon (C%), nitrogen (N%), phosphorus (PO 4 3− ), potassium (K + ) and silt, whereas communities of deeper soils correlated with increased pH and sand content. 
Interestingly, PO 4 3− fit significantly into the fungal ordinations, while calcium (Ca 2+ ) and magnesium (Mg 2+ ) only fit into the prokaryotic ordinations, with higher levels towards the soil surface. Chemistry data alone indicate that soil N% and C%, as well as the amount of K + , decreased significantly ( p ≤ 0.05) with increasing depth. Soil micronutrients (i.e., Ca 2+ and Mg 2+ ) accumulated at median soil depths. Soil texture changed with depth, with % sand increasing considerably in deeper soils. An inverse pattern was seen for silt, which was statistically significant ( p ≤ 0.05) in prairie and switchgrass but not in poplar. In addition, strong positive correlations were found between Ca 2+ and Mg 2+ contents and cation exchange capacity (CEC) values (Figure ). The adjusted R 2 from CAP analysis performed on individual groups of samples (Figure ) clearly showed that the effect of depth on community structure was generally higher for soils than roots, particularly for prokaryotes. Depth affected poplar soil fungi the most and root fungi the least. On the other hand, the effect of crop was higher in root than in soil communities and, in the fungal communities, generally higher close to the surface than in deeper soils (Figure ). For example, the highest effect of crop was detected for fungal root communities at 10–25 cm.
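The ordination and permutation tests reported above were run in R (vegan's adonis and betadisper); for illustration, the Python sketch below implements a minimal one-factor PERMANOVA on Bray–Curtis dissimilarities. The OTU counts and the niche grouping are made up, and the function is a bare-bones stand-in for the full models used in the study rather than a replacement for them.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def permanova_one_way(condensed_dist, groups, n_perm=999, seed=0):
    """Minimal one-factor PERMANOVA: pseudo-F and permutation p-value."""
    d2 = squareform(condensed_dist) ** 2            # squared distance matrix
    groups = np.asarray(groups)
    n, a = len(groups), len(np.unique(groups))
    ss_total = d2[np.triu_indices(n, 1)].sum() / n   # total sum of squares

    def pseudo_f(labels):
        ss_within = 0.0
        for g in np.unique(labels):
            idx = np.where(labels == g)[0]
            sub = d2[np.ix_(idx, idx)]
            ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
        ss_among = ss_total - ss_within
        return (ss_among / (a - 1)) / (ss_within / (n - a))

    f_obs = pseudo_f(groups)
    rng = np.random.default_rng(seed)
    f_perm = np.array([pseudo_f(rng.permutation(groups)) for _ in range(n_perm)])
    p_value = (np.sum(f_perm >= f_obs) + 1) / (n_perm + 1)
    return f_obs, p_value

# Hypothetical OTU counts for 4 soil and 4 root samples (30 OTUs each)
rng = np.random.default_rng(1)
otu = rng.integers(1, 50, size=(8, 30)).astype(float)
dist = pdist(otu, metric="braycurtis")               # Bray-Curtis dissimilarities
niche = np.array(["soil"] * 4 + ["root"] * 4)
f_stat, p = permanova_one_way(dist, niche)
print(f"pseudo-F = {f_stat:.2f}, p = {p:.3f}")
```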
Neutral processes could help drive community assembly and maintenance. To assess the importance of neutral and non‐neutral processes, for example, microbial interactions or dispersal, we fit our data to a neutral assembly model (Figure , Figure ). We found that the proportions of neutral, above‐, and below‐prediction OTUs were similar across depths and in the different crops (Figure ). However, when just the core fungal and prokaryotic OTUs (defined here as the minimum OTU set that preserves the same community structure) (Shade and Stopnisek, ) were selected separately, some interesting trends were found. In the fungal communities, neutral OTUs (i.e., OTUs driven by drift) were more abundant in deeper soils (Figure ). In surface soils, OTUs above (i.e., OTUs selected for or maintained by the host) or below (i.e., OTUs selected against by the host, or dispersal limited) the model predictions were more abundant (Figure ), particularly in poplar and switchgrass. To detect whether the proportions of core OTUs classified by the neutral models grouped according to crop or depth, we performed a PCA and tested significant differences between groups with PERMANOVA. The proportions of neutral, above‐ and below‐prediction fungal OTUs separated significantly by depth, which explained about 53% of the variation in the data. In the prokaryotic communities, we can clearly see a higher number of OTUs below the model prediction in deeper soils and a lower number of neutral OTUs (Figure ), especially in poplar and switchgrass. In general, microbial patterns in prairie systems were less distinct, perhaps due to the diverse nature of prairies in terms of the plant species present and their associated microbiomes. Regarding the neutral models' goodness of fit ( r 2 ), the models based on the prokaryotic communities showed on average a higher fit compared to the fungal ones (Figure ), implying a higher importance of neutral processes in structuring these communities. Neutral fit was also generally lower in deeper soil samples compared to the surface in both communities. In addition, the migration rate ( m ) was on average higher in soil samples closer to the surface and lower in deeper soils, for both fungi and prokaryotes. Low m values suggest a higher influence of dispersal limitation in community assembly (Figure ). Again, we used PCA and PERMANOVA to detect significant differences in r 2 and m between crops or depths. We found that only in the prokaryotic communities did r 2 and m significantly separate by depth, explaining about 66% of the variation in the data, indicating that neutral processes have greater consequences for community assembly in deeper samples compared to shallower ones. Since the core taxa appear to follow specific trends or relationships across depths (i.e., 0, 25, 50 and 100 cm) and with particular plant hosts, we used these taxa to explore covariance networks (Figure ) to identify potential interactions between the members of the communities. Microbial networks showed quantitative and qualitative shifts in diversity across soil depth and crop species. The number of Ascomycota fungal OTUs decreased with depth while bacterial OTUs in the Actinobacteriota and Proteobacteria increased with depth. This was most pronounced in poplar and prairie systems, but not in switchgrass, where samples at 25 and 50 cm depth were the most diverse (Figure ). Bacterial OTUs within the Actinobacteriota, Proteobacteria and Chloroflexi, and fungi in the Ascomycota and Chytridiomycota, were defined as network hubs (Table ).
Interestingly, only a single fungal hub was present in poplar, and the few hubs in switchgrass were comprised exclusively of bacteria, as reported by the numbers within the bubbles in Figure . Positive and negative intra‐ and inter‐kingdom links showed that fungi–fungi links decreased with increasing depth in all crops, while bacteria–bacteria links increased, except in switchgrass where they stayed more or less the same (Figure ). Fungi–bacteria links decreased with depth in poplar but not in prairie and switchgrass. Regarding network complexity, several network properties increased with depth until 50 cm, and then decreased (Table ). When we looked at the abundance of positive and negative intra‐ and inter‐kingdom root‐to‐root links (i.e., the higher the abundance, the more links there are between root OTUs), we found that root‐to‐root fungi–fungi links decreased with increasing depth in all crops (in deeper soil there are more soil‐to‐root links compared to the surface), except for switchgrass, where differential patterns were not very clear (Figure ). Root‐to‐root bacteria–bacteria links increased with increasing depth (in deeper soil there are more root‐to‐root links compared to the surface). Positive and negative root‐to‐root fungi–bacteria links decreased in poplar, while they seemed to increase, or showed no defined trend, in prairie and switchgrass (Figure ). Interestingly, the highest positive fungi–fungi root‐to‐root abundance was detected for prairie at 0–10 cm, while the highest bacteria–bacteria abundance was detected for poplar at 50–100 cm. At the phylum level, there was an increase in root‐to‐root links between OTUs within the Proteobacteria and Actinobacteriota, and a decrease within the Ascomycota, for all crops (Figures , ). Five network properties were able to statistically discriminate ( p ≤ 0.05) between the networks across depth but not across crops (Figure ). Modularity and the number of module hubs were higher in deeper soils. Average module size and average degree were correlated with one another and, together with negative links, were higher in soils at 25–50 cm depth (Figure ).
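As an illustration of the kinds of network properties compared across depths and crops (modularity, average degree, negative links, hubs), the Python sketch below computes them with networkx on a toy set of signed co-occurrence edges. Node names, edge weights and the simple degree-percentile hub rule are assumptions made for the example; formal hub definitions typically rely on within-module degree and participation coefficients.

```python
import numpy as np
import networkx as nx

# Hypothetical signed co-occurrence edges between core OTUs (illustrative only)
edges = [("Asco_OTU1", "Actino_OTU3", 0.72), ("Asco_OTU1", "Proteo_OTU8", -0.55),
         ("Actino_OTU3", "Proteo_OTU8", 0.61), ("Chloro_OTU2", "Actino_OTU3", 0.58),
         ("Asco_OTU5", "Asco_OTU1", 0.66), ("Chytrid_OTU9", "Proteo_OTU8", -0.49),
         ("Chloro_OTU2", "Proteo_OTU8", 0.52)]

G = nx.Graph()
for u, v, w in edges:
    G.add_edge(u, v, weight=abs(w), sign=1 if w > 0 else -1)

avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
negative_links = sum(1 for _, _, d in G.edges(data=True) if d["sign"] < 0)

# Modularity from a greedy community partition
communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
modularity = nx.algorithms.community.modularity(G, communities, weight="weight")
avg_module_size = np.mean([len(c) for c in communities])

# Degree-based stand-in for hub detection
degrees = dict(G.degree())
cutoff = np.percentile(list(degrees.values()), 90)
hubs = [node for node, k in degrees.items() if k >= cutoff]

print(f"avg degree = {avg_degree:.2f}, negative links = {negative_links}")
print(f"modularity = {modularity:.2f}, avg module size = {avg_module_size:.1f}, hubs = {hubs}")
```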
In this study, we assessed the major forces that regulate the dynamics of soil microbial communities in plant–soil environments along a vertical niche belowground. Leveraging long‐term field‐scale replicated experiments, we were able to analyse several aspects of these plant‐associated microbiomes along a 1‐m soil‐depth gradient for poplar, prairie, and switchgrass biofuel crops in replicated plots. We demonstrate a significant vertical niche in soil and root compartments, and consider the drivers and consequences of this vertical diversity gradient in roots and bulk soils.

Differences between root and soil microbiomes

As documented in other studies, we report that microbial communities in roots are less diverse and quite distinct from those in bulk soil (Goldmann et al., ; López‐Angulo et al., ). We also found that microbial communities are variably distributed at a fine scale. Yet, alpha diversity in roots and soils followed different trends along the sampled depth gradient. Soil carbon, pH and nitrogen appear to be the most important factors explaining microbial biomass and functional diversity in soil ecosystems (Fierer & Jackson, ; Fierer, ; Bastida et al., ). As previously suggested (Celestina et al., ; Mundra et al., ; Yokota et al., ), greater carbon stocks and nutrient content of surface soils may account for significantly greater microbial diversity in surface soils, as we found across all bioenergy crops. Aboveground litter contributes diverse organic matter to mineral soils, but these inputs decrease significantly with increasing soil depth, where carbon from roots becomes increasingly important in driving heterotrophic soil food webs. Greater nutrient, oxygen and water availability, as well as higher microclimatic variation, may also contribute to more ecological niches in surface soils compared to deeper soil, thus enabling the support of greater microbial diversity (Mundra et al., ).

The belowground vertical niche

While drastic differentiation within bacterial and fungal communities is known to exist between organic and mineral soil horizons (Peršoh et al., ), our study focused on soil below the organic horizon and also found significant differentiation. Previously, ectomycorrhizal fungi were shown to differentiate along a vertical niche (Dickie et al., ). Although poplar was the only ectomycorrhizal host sampled here, we expected that other microbial guilds would follow similar patterns of differentiation, and this is what we found. Decreasing microbial species richness with increasing soil depth is well documented in soil microbial ecology studies across different ecosystems (Zhang et al., ; Jiao et al., ; Hao et al., ; Frey et al., ). It has also been shown that variable gradients of carbon, nitrogen, pH and oxygen usually correlate with declines in microbial biomass and diversity (Fierer et al., ; Schlatter et al., ; Ren et al., ). For instance, the abundance and diversity of bacterial communities in a permafrost zone were both found to decrease to a 70‐cm depth, and abiotic factors, such as soil temperature, carbon, nitrogen, phosphorus, moisture and clay content, were the most significant factors driving bacterial community diversity (Ren et al., ). Yet, these factors often co‐vary with depth, making it challenging to disentangle the main drivers without more controlled studies.
Core microbiome

Taxa that are consistent across samples and datasets constitute the core microbiome, and can be defined by specific abundance‐occupancy distributions (Shade and Stopnisek, ). Core microbiome members are hypothesized to be functionally significant to their niche. To better understand the ecological and potentially functional relationships shared between soil microbes and plant rhizospheres, we identified core microbiome members across niches and depths. We fit these microbial distributions to a neutral model to predict the importance of selection and drift in organizing these communities. Together, our data showed that the fungal communities of the sampled bioenergy crops in the surface soil layers (e.g., 0–25 cm) have a higher number of core OTUs that are above or below the neutral model predictions, while neutral OTUs are higher in the deep layers (50–100 cm). This is in contrast with what was found by Powell et al., who investigated the role of deterministic and stochastic processes in vertical soil horizons at 183 sites across Scotland and measured high stochasticity in fungal communities in surface soils (Powell et al., ). However, Powell et al. analysed natural sites to a depth of 75 cm, rather than agricultural fields, which may explain the differences in the results. In our analysis, most of the fungi at the soil surface undergo selective processes, mediated by the host or by the microbes themselves, and finally occupy and maintain a specific niche—coexistence through niche differentiation. In contrast, in deeper soils, we find more fungi that follow a model of passive dispersal and ecological drift. This phenomenon causes species abundances to vary randomly, reducing diversity within communities and increasing differences between communities. In harsh environments, such as deeper soils where resources are limited, an equalizing mechanism that reduces differences in relative fitness among species has also been proposed to maintain species coexistence (Kim & Ohr, ). Interestingly, we saw a different pattern in the prokaryotic communities. A higher number of OTUs below the model prediction was observed in the deeper soil layers, while the number of neutral OTUs decreased with increasing soil depth. Notably, the proportions of neutral, above‐, and below‐prediction core OTUs clustered significantly by depth in the fungal communities, but not in the prokaryotic communities. Depth was a statistically significant factor that influenced model fit and migration rate, both of which decreased with depth, for the core prokaryotic communities. We speculate that the unicellular nature of prokaryotic organisms, including traits of motility and dispersal via soil hydrology, differentiates the macroecology of bacteria from that of filamentous fungi. Indeed, it has been shown that soil water content correlates with the richness of soil microbial communities (Jonas et al., ; Aung et al., ) and that motility impacts root colonization by bacteria (Knights et al., ). It is also important to consider that moisture content and temperature are generally more stable in deeper soils compared to surface soils.

Microbial networks

Microbial networks are a way to statistically assess the strength of interactions and linkages between taxa within a dataset. We assessed microbial networks based on the identified core microbiomes and found that deeper soils contained denser networks with higher connectivity.
A similar approach was recently used in grassland ecosystems by Upton et al. ( ), who found that fungal and bacterial networks of native plants were more connected at lower soil depths, even though there were fewer nodes. Higher connectivity in deeper soil may be due to the relative importance that root C inputs have on microbial activity at deeper rooting depths. In addition, since deeper soil depths harboured less diverse fungal communities, we expected to see larger networks as more OTUs were shared between samples across niches and depths. We detected the general trend of decreasing fungal and increasing prokaryotic core OTUs with increasing depth in all crops. Our results agree with those of Yao et al., who used phospholipid fatty acid (PLFA) analysis to investigate factors influencing soil microbial communities in temperate grasslands of northern China (Yao et al., ). They also found that fungi were more abundant at the surface while prokaryotes were more abundant in deeper soils, highlighting another fundamental difference between patterns of fungal and bacterial community diversity. Results from our study show that fungi–fungi links decreased with increasing depth in all crops while bacteria–bacteria links increased with depth, or remained fairly constant in the case of switchgrass. The diversity of core fungi in the roots decreased with depth, while that of bacteria increased. Microbial network modularity, number of hubs, average module size, average degree and negative links were statistically significantly separated by depth, with more modules (and more module hubs) in deeper soil, implying that these communities may have greater resistance to environmental changes compared to communities in upper soil layers. These results contrast with those found by Mundra et al. ( ), where upper mineral soil harboured higher modularity and also more inter‐kingdom links compared to the organic layers above or deeper mineral layers. Nonetheless, the differential partitioning of core fungal and bacterial networks with soil depth across all three bioenergy species highlights the important contribution of plant communities to deep soil microbial communities, whose functions are critical to the sustainability of these bioenergy cropping systems.
Microbial communities are a key component of any agricultural system and their role in biogeochemical cycling is well known. However, the extent to which these communities vary in diversity and structure with soil depth, and their relationships with the host, are less studied. In this study, we found that soil depth has a major impact on soil and root microbiomes, with soil microbial diversity correlating with carbon availability and decreasing with soil depth. Communities in the deeper soil were less diverse, but were also less heterogeneous in the roots and more heterogeneous in the soils. In deeper soils, roots appear to be a major factor generating niche breadth for microbial life to persist and function, further impacting soil structure and functioning. Stochastic processes described the prokaryotic communities more accurately than they did the fungal communities, and there was a significantly different model fit for fungi and bacteria across this vertical soil niche. Overall, neutral fungal core taxa were more abundant in deeper soils, which were dominated by dispersal‐limited prokaryotes, underscoring the biological, ecological and morphological differences between these kingdoms. Co‐occurrence networks were more connected and modular in deeper soils, indicating a higher rate of interdependence in more confined oligotrophic soil environments. Taken together, our results provide a novel understanding of soil microbiomes and their interactions in connection to different bioenergy hosts and cropping systems. This knowledge is key to leveraging plant microbiomes for the many functions they provide in the environment to support cleaner and more sustainable agricultural and energy economies.
Gian Maria Niccolò Benucci: Methodology, Illumina library preparation, software, bioinformatics, data analysis, validation, data curation, database, supervision, writing ‐ original draft preparation, writing ‐ reviewing and editing. Pedro Beschoren da Costa: Methodology, DNA extraction, data analysis, writing‐ reviewing and editing. Xinxin Wang: Methodology, DNA extraction. Gregory Bonito: Conceptualization, validation, supervision, project administration, funding acquisition, investigation, writing ‐ original draft preparation, writing ‐ reviewing and editing.
Authors declare no competing interests in relation to the work described.
Data S1: Supporting information.
|
The carbon‐quality temperature hypothesis: Fact or artefact?
|
0de65a8d-fc40-4769-afd8-99adb2aec473
|
10099867
|
Microbiology[mh]
|
INTRODUCTION Soils store twice as much carbon in the upper 1 m as the atmosphere (Batjes, ). These large carbon stocks in soil organic matter (SOM) could be lost through enhanced decomposition under a warming climate, leading to a positive soil carbon‐climate feedback (García‐Palacios et al., ; Kirschbaum, ). The extent of the enhancement of SOM decomposition in response to increasing temperature depends on the temperature sensitivity of decomposition. However, our understanding of temperature sensitivity (Sierra, ), for example, the temperature coefficient Q 10 , of SOM decomposition, is still incomplete (Davidson et al., ; Fang et al., ; Giardina & Ryan, ; Knorr et al., ; Melillo et al., ). This key uncertainty limits our ability to predict how SOM decomposition may respond to climate change. Much of our understanding of the temperature response of SOM decomposition is based on studies of short‐term measurements of soil respiration rates that are dominated by the response of readily decomposable labile carbon. The bulk of the carbon stocks in the soil, however, consists of resistant carbon that decomposes more slowly, and these rates may respond to temperature differently from those for labile carbon (Bosatta & Ågren, ; Davidson et al., ; Davidson & Janssens, ; Hartley et al., ; Knorr et al., ). It is, therefore, uncertain whether insights from the responses of the turn‐over of labile carbon can be applied to the ultimately more important turn‐over rates of resistant soil carbon. This uncertainty hinders our ability to predict the overall impacts of climate warming on SOM decomposition rates (Conant et al., ). An important obstacle to understanding the temperature sensitivity of SOM decomposition lies in the confounding of responses between the two principal stabilisation mechanisms of organic carbon in the soil, that is, chemical recalcitrance and physical protection by the matrix of soil minerals (Dungait et al., ). To avoid semantic confusion, hereafter the word “resistance” or “resistant” refers to the slow decomposition of SOM irrespective of its causes, that is, either controlled by physical protection or its chemical properties. And the word “recalcitrance” or “recalcitrant” only refers to the chemical properties of SOM. Traditionally, the molecular structure of the SOM molecules, or chemical recalcitrance, had been thought to be the primary factor to determine the decomposition rates in soils (Melillo et al., ; Sollins et al., ; von Lützow & Kögel‐Knabner, ). For example, lignin in litter or the soil is regarded as a recalcitrant compound due to its complex chemical structure and thermodynamically stable molecular configuration. Its breakdown is, therefore, expected to require a higher activation energy than that of simple molecules like glucose (Davidson & Janssens, ). However, newer work has indicated that the resistance to degradation of SOM is mostly controlled by the interaction between soil minerals and organic carbon molecules (Conant et al., ; Dungait et al., ; Vogel et al., ) because of the formation of organo‐mineral complexes that can protect organic carbon from microbial decomposition (Baldock & Skjemstad, ; Eusterhues et al., ; Kleber et al., ). The physical protection is, therefore, now generally considered to be more important for protecting soil carbon than chemical recalcitrance (Dungait et al., ; Kirschbaum et al., ; Marschner et al., ; Mikutta et al., ), or, as Schmidt et al. 
( ) described it, "the persistence of soil organic carbon is primarily not a molecular property, but an ecosystem property". The formation of organo‐mineral complexes in soils can render chemically labile molecules resistant to decomposition. The resistance of SOM decomposition, therefore, involves not only the chemical recalcitrance of carbon compounds but also their accessibility (Conant et al., ; Dungait et al., ) to microbes and their extracellular enzymes (Allison et al., ). The physical protection of organic carbon, or its encapsulation in particle aggregates in soils, is also likely to respond less to temperature than unprotected carbon (Hartley et al., ; Moinet et al., ). Nonetheless, ongoing research continues to investigate whether there are any general patterns between the biochemical recalcitrance of organic matter and its temperature sensitivity (Alves et al., ; Briones et al., ; Li et al., ; Liu et al., ; Moinet & Millard, ; Park et al., ; Reynolds et al., ). The role of biochemical recalcitrance attains particular importance under conditions where physical protection is necessarily less important, such as in organic soils, in the litter layer or in soils with limited mineral protective capacity like sandy soils. Biochemically recalcitrant SOM decomposes more slowly because of its complex and thermodynamically stable molecular structure, which may also make its decomposition more sensitive to temperature (as indicated by a higher Q 10 ) than that of labile compounds (Figure ). This effect on the decomposition of recalcitrant compounds can be expressed through a higher activation energy than that of labile compounds (Davidson & Janssens, ), which would suggest that chemically recalcitrant compounds should respond more strongly to temperature than more labile compounds. This notion has been theoretically formalised as the carbon‐quality temperature (CQT) hypothesis (Bosatta & Ågren, ). If the CQT hypothesis is correct, global warming would then lead to larger carbon losses from SOM decomposition and enhanced positive climate feedback through enhanced decomposition of any large stores of biochemically recalcitrant carbon. Experimental tests of the CQT hypothesis have generally tried to assess whether there is an inverse relationship between a temperature sensitivity index, Q 10 (Fierer et al., , ; Li et al., ; Mikan et al., ; Xu et al., ) or activation energy, E a (Craine et al., ) and a carbon quality index (Figure ). Determining carbon quality, however, is problematic as studies have not usually been able to chemically characterise diverse mixtures of substrates in the soil or incorporate the effects of physical protection (Dungait et al., ) within the soil matrix. Carbon quality, therefore, has often been defined functionally as the respiration rate at a common reference temperature. Different studies have used either exponential or Arrhenius‐like functions to derive temperature‐sensitivity and carbon quality indices by using measurements of soil respiration rate ( R s ) at different temperatures ( T ). For example, in the exponential function, R s = R 0 e bT or its log‐transformed version ln( R s ) = ln( R 0 ) + bT , the logarithm of the respiration rate at 0°C, that is, ln( R 0 ), is often defined as the carbon quality index (e.g., Fierer et al., , ; Li et al., ; Mikan et al., ; Xu et al., ), and the parameter b is used to determine the temperature sensitivity index Q 10 ( Q 10 = e 10 b ).
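As a concrete illustration of this fitting approach, the short Python sketch below derives b, ln(R0) and Q10 from a made-up set of incubation measurements; the temperatures and rates are invented solely to show the calculation.

```python
import numpy as np

# Hypothetical incubation: respiration rate measured at several temperatures
T = np.array([4.0, 10.0, 16.0, 22.0, 28.0])     # °C
Rs = np.array([0.21, 0.34, 0.55, 0.90, 1.42])   # arbitrary rate units

# Linear regression of ln(Rs) on T gives slope b and intercept ln(R0)
b, lnR0 = np.polyfit(T, np.log(Rs), 1)
Q10 = np.exp(10.0 * b)

print(f"ln(R0) = {lnR0:.2f} (carbon quality index), b = {b:.4f}, Q10 = {Q10:.2f}")
```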
Using this approach with data from soil incubation experiments, different researchers (Fierer et al., , ; Li et al., ; Mikan et al., ; Xu et al., ) have reported negative correlations between Q 10 and ln( R 0 ) to support the CQT hypothesis (Figure ). The validity of the negative correlation between Q 10 and ln( R 0 ) has, however, been challenged on statistical grounds, since a correlation between Q 10 and ln( R 0 ) could simply arise from random measurement errors (Reichstein et al., ). As b is the slope and ln( R 0 ) is the intercept of the regression of ln( R s ) against temperature, any random variation in individual data points would have inverse effects on fitted ln( R 0 ) and b values (Reichstein et al., ). To counter that, others have argued that this error‐originated compensation between slope and intercept could be overcome if many independent samples were used (Fierer et al., ) where the randomness of slopes and intercepts might mitigate against any consistent pattern between them. In particular, data from different geographical locations were collected to justify CQT by demonstrating a common negative correlation between Q 10 (Fierer et al., ) or activation energy (Craine et al., ) and the carbon quality index that was defined as the respiration rate at a reference temperature. Despite concern about the analysis raised previously (Reichstein et al., ), ongoing studies continue to frequently apply reasoning based on the CQT hypothesis to interpret temperature responses of soil carbon decomposition (Ghosh et al., ; Li et al., ; Liu et al., ; Yang et al., ; Yanni et al., ). It is thus warranted to reappraise the validity of experimental tests of the CQT hypothesis.
A CONCEPTUAL PARADOX OF CURRENT EXPERIMENTAL TESTS

In addition to the statistical problem discussed by Reichstein et al. ( ), we found that the definition of the carbon quality index, that is, the respiration rate at an arbitrarily chosen reference temperature, also results in a conceptual paradox, as we demonstrate below. If recalcitrant carbon has a lower decomposition rate at a reference temperature, as postulated by the CQT hypothesis (Figure ), it inevitably means that labile and recalcitrant carbon must have the same decomposition rates at a cross‐over temperature because of the difference in slopes of the corresponding ln( R s )– T curves (Figure ). For example, at T 1 , ln( R s ) of a labile carbon compound (Figure , closed circle) is higher than that of a recalcitrant carbon compound (Figure , open circle). Since the recalcitrant compound has a higher Q 10 , and thus a steeper slope of its ln( R s )– T curve (Figure , dashed line), the two curves must cross at T 2 (squares, the cross‐over temperature). At an even higher temperature T 3 , the carbon compound that was defined as more recalcitrant at T 1 would be defined as more labile at T 3 . The combination of the CQT hypothesis with an arbitrary choice of reference temperature thus results in the paradoxical conclusion that a given compound can be defined as either more labile or more recalcitrant just by changing the reference temperature. This means that simply changing the reference temperature from low to high to define carbon quality could shift the correlation between Q 10 and ln( R s ) from negative to positive.
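The cross-over temperature can be worked out directly from the two fitted lines; the sketch below does this for an invented labile/recalcitrant pair (the ln(R0) and Q10 values are hypothetical, chosen only to make the arithmetic concrete).

```python
import numpy as np

# Hypothetical pair of substrates under the CQT assumption: the 'recalcitrant'
# one respires more slowly at 0 °C (lower ln(R0)) but has the higher Q10.
lnR0_labile, Q10_labile = 0.0, 2.0
lnR0_recal, Q10_recal = -1.5, 3.5

b_labile = np.log(Q10_labile) / 10.0
b_recal = np.log(Q10_recal) / 10.0

# The two ln(Rs)-T lines intersect where lnR0_labile + b_labile*T = lnR0_recal + b_recal*T
T_cross = (lnR0_labile - lnR0_recal) / (b_recal - b_labile)
print(f"cross-over temperature T2 = {T_cross:.1f} °C")
# Any reference temperature above T_cross would label the 'recalcitrant'
# substrate as the more 'labile' one, flipping the quality ranking.
```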
REANALYSIS OF A GLOBAL DATA SET OF SOIL INCUBATION EXPERIMENTS Figure presents a purely hypothetical case. To determine whether this paradox is also evident in a realistic set of observations, we reanalysed a data set of temperature response measurements of soil respiration that had been compiled from 113 independent soil incubations across 60 locations globally, consisting of 77 incubation experiments in the United States (Fierer et al., ) and 36 incubations in China (Li et al., ). Experimental setups and sampling methods have been described in detail in the original papers. Briefly, soils were sampled from various ecosystem types at different latitudes globally and were incubated in the laboratory to determine the respiration rate between 10 and 30°C (Fierer et al., ) or between 4 and 28°C (Li et al., ). By assuming a constant Q 10 over the measured temperatures, an exponential function was applied to describe the temperature dependence of soil respiration as: R s = R 0 e bT ⇒ ln R s = ln R 0 + bT where R s is respiration rate, R 0 and b are fitted parameters. In the semilogarithmic plot of ln( R s ) versus T (°C), ln( R 0 ) is the logarithm of respiration rate at 0°C and b is the slope of the linear regression of ln( R s ) versus T . The parameter b further defines Q 10 , a temperature sensitivity index of soil respiration as Q 10 = e 10 b . Using the reported Q 10 and ln( R 0 ), we reconstructed the temperature response curve for each individual incubation and calculated respiration rates at temperatures ranging from 0 to 60°C, which is within the relevant range for biological reactions. Using the recalculated respiration rate, we further determined correlations between Q 10 and carbon quality, defined as the logarithm of respiration rate at temperatures chosen between 0 and 60°C at 1°C increments. Data were processed and plotted using MATLAB R2018a (The MathWorks, Inc.). If carbon quality is defined as the logarithm of soil respiration rate at 0°C, this set of observations results in a significant negative correlation between Q 10 and ln( R 0 ) (Figure ). This would be consistent with the CQT hypothesis. However, the choice of 0°C as the reference temperature is arbitrary, and if one, instead, chose the respiration rate at 43°C, ln( R 43 ), as the proxy for quality index, the correlation between the quality index and Q 10 would disappear (Figure ), and the result would then be inconsistent with the CQT hypothesis. More generally, the correlation between Q 10 and ln( R ) can shift from negative to positive simply by arbitrarily choosing different reference temperatures from 0 to 60°C (Figure ). This dependence of the correlation coefficient on a selected reference temperature clearly presents a conceptual problem for testing the CQT hypothesis within temperatures ranging from 0 to 60°C for soil respiration. The respiration rate would only represent a valid carbon quality index if the negative relationship between Q 10 and the carbon quality index remained irrespective of the chosen reference temperature, within the relevant temperature range. However, this requirement is not met (Figure ), since the observation of a negative correlation, as the central tenet of the CQT hypothesis, depends entirely on the arbitrary choice of a reference temperature for determining the carbon quality index. Tests of the CQT hypothesis, therefore, require alternative measures of carbon quality.
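A compact way to reproduce the reference-temperature sweep described above is sketched below, using synthetic (ln(R0), Q10) pairs rather than the 113 published incubations; a negative association at 0°C is built into the toy data, and it weakens and then reverses as the reference temperature is raised, mirroring the behaviour described in the text.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in for a set of incubations, each summarised by (ln(R0), Q10)
rng = np.random.default_rng(0)
Q10 = rng.uniform(1.5, 4.0, size=100)
lnR0 = -1.0 - 2.0 * np.log(Q10) + rng.normal(0.0, 0.3, size=100)
b = np.log(Q10) / 10.0

# Reconstruct ln(rate) at different reference temperatures and re-test the correlation
for T_ref in (0, 20, 43, 60):
    lnR_ref = lnR0 + b * T_ref
    r, p = pearsonr(Q10, lnR_ref)
    print(f"T_ref = {T_ref:2d} °C: r(Q10, ln R) = {r:+.2f} (p = {p:.3g})")
```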
THE CHEMICAL DEFINITION OF CARBON QUALITY Carbon quality intrinsically refers to the degree of difficulty of a carbon compound being decomposed through a chemical reaction. This degree of difficulty of a reaction may be defined as the spontaneous reaction rate in an aqueous solution in the absence of catalysts such as enzymes (Wolfenden, ), namely the uncatalysed rate, k non . A small k non value means a slow reaction rate and thus a difficult reaction and a recalcitrant compound. For example, the hydrolysis of phosphate monoester dianions, like fructose‐1,6‐bisphosphate (a critical compound in carbohydrate catabolism in biological systems), is one of the slowest uncatalysed biological reactions (Lad et al., ). This reaction has a k non value of 2.0 × 10 −20 s −1 at 25°C, corresponding to a half‐life of 1.1 × 10 12 years (Lad et al., ). Because of the generally slow nature of uncatalysed reactions, it is a common practice to estimate k non at ambient temperatures (Wolfenden, ) by measuring reaction rates at a series of greatly elevated temperatures and then extrapolating reaction rates back to ambient temperature, for example, 25°C, using a linear Arrhenius plot (Figure ).
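Two of the quantities introduced here are easy to reproduce numerically: the half-life implied by a first-order k_non (the phosphate monoester example above), and the extrapolation of k_non back to 25°C from a linear Arrhenius fit of rates measured at elevated temperatures. The sketch below does both; the high-temperature rate constants are invented for illustration.

```python
import numpy as np

# Half-life implied by a first-order uncatalysed rate constant
k_non_25 = 2.0e-20                                    # s^-1 (phosphate monoester dianion)
half_life_years = np.log(2) / k_non_25 / (3600 * 24 * 365.25)
print(f"half-life = {half_life_years:.1e} years")     # ~1.1e12 years, as quoted above

# Extrapolating k_non back to 25 °C from hypothetical high-temperature measurements
R = 8.314                                             # J mol^-1 K^-1
T_meas = np.array([423.0, 448.0, 473.0, 498.0])       # K (150-225 °C)
k_meas = np.array([3.2e-9, 2.4e-8, 1.5e-7, 8.1e-7])   # s^-1 (made-up values)

slope, intercept = np.polyfit(1.0 / T_meas, np.log(k_meas), 1)   # slope = -Ea/R
Ea_kJ = -slope * R / 1000.0
k_non_25_extrap = np.exp(intercept + slope / 298.15)
print(f"Ea = {Ea_kJ:.0f} kJ mol^-1, extrapolated k_non(25 °C) = {k_non_25_extrap:.1e} s^-1")
```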
TESTING THE CQT HYPOTHESIS IN UNCATALYSED AND ENZYME‐CATALYSED REACTIONS To test whether a negative relationship, as the central tenet of the CQT hypothesis, exists between Q 10 or E a and k non as the carbon quality index, we collected data from a total of 56 uncatalysed and 21 corresponding enzyme‐catalysed reactions from the published literature. The activation energy ( E a ) of reactions was determined by the slope ( −E a / R ) of the temperature response curve in the Arrhenius plot (see an example in Figure ). In data sets where the slope of the temperature response curve was given as enthalpy of activation (− ΔH ‡ / R ) by fitting the Eyring equation instead of the Arrhenius function, we determined E a as E a = ΔH ‡ + RT (Chang & Thoman, ), where R is the universal gas constant. For uncatalysed reactions, both E a and the reaction rate at 25°C ( k 25 ) were directly collected from tables published in the literature. For enzyme‐catalysed reactions, E a was collected from tables, text or recalculated from graphs of published studies. To determine the correlation coefficients between Q 10 and k non at different temperatures, k non at temperatures from 0 to 200°C was calculated by using E a and the respective rates at 25°C. We further applied the Jackknife resampling technique (Martinez & Martinez, ) to calculate the correlation coefficients for both uncatalysed ( n = 56) and the corresponding enzyme‐catalysed reactions ( n = 21) by repeatedly omitting one observation from the original data set. Therefore, there were 56 and 21 estimates of the correlation coefficients at each temperature for uncatalysed and catalysed reactions, respectively. The mean correlation coefficient and the associated standard errors at each temperature were further determined based on the calculated correlation coefficients from Jackknife resampling (Martinez & Martinez, ). From this data collection of uncatalysed reactions, we found that the k non of different reactions at 25°C spanned a range from 10 −3 to 10 −20 s −1 corresponding to half‐lives from minutes to billions of years. Correspondingly, the activation energies of uncatalysed reactions ranged from 35 to 199 kJ mol −1 , with Q 10 , calculated between 20 and 30°C, ranging from 1.6 to 15. In this analysis of uncatalysed reactions, we also found a significant negative correlation between Q 10 and ln( k non ) at 25°C (Figure ), similar to the correlation between ln( R 0 ) and Q 10 described for soil respiration (Figure ). However, in contrast to the inconsistent correlation shown in Figure for soil respiration, we obtained a consistent negative correlation between observed rates and E a or Q 10 over the relevant biological temperature range, for example, 0 to 60°C, and also over a much wider temperature range of up to 200°C (Figure ). Even this correlation must eventually be lost at much higher temperatures (Figure ) according to the conceptual paradox shown in Figure . For both uncatalysed reactions and soil respiration, it is impossible to avoid the conceptual paradox. If one defines carbon quality as a rate at an arbitrarily chosen temperature, it must change the quality assessment with the choice of reference temperature. A derived negative correlation, therefore, cannot be used to support the CQT hypothesis for either soil respiration or uncatalysed reactions. 
However, it is possible to test the CQT hypothesis for enzyme‐catalysed reactions by using the corresponding uncatalysed rate, that is, ln( k non ), as the carbon quality index and Q 10 derived from enzyme‐catalysed reactions. Since ln( k non ) and Q 10 of catalysed reactions are two independent variables derived from separate measurements, this avoids any problems of circularity in the derived correlation, thus providing a valid test of the CQT hypothesis for catalysed reactions. For enzyme‐catalysed reactions, we found that there was no correlation between E a and ln( k non ) at 25°C (Figure ; Figure ), or at any chosen reference temperature between 0 and 200°C (Figure ). This lack of correlation between Q 10 of enzyme‐catalysed reactions and ln( k non ), therefore, does not support the CQT hypothesis for enzyme‐catalysed reactions like microbial decomposition of SOM. Instead, our results suggest that under conditions where decomposition rates are controlled by enzymatic processes, the temperature sensitivity should be similar regardless of the difference in chemical recalcitrance of the degradable compounds.
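The leave-one-out (jackknife) summaries used to attach a mean and standard error to each correlation coefficient can be sketched as follows; the ln(k_non) and Q10 values here are synthetic stand-ins with a built-in negative association, not the collected literature data.

```python
import numpy as np
from scipy.stats import pearsonr

def jackknife_correlation(x, y):
    """Leave-one-out (jackknife) mean and standard error of the Pearson r."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    r_loo = np.array([pearsonr(np.delete(x, i), np.delete(y, i))[0] for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((r_loo - r_loo.mean()) ** 2))
    return r_loo.mean(), se

# Synthetic stand-ins for 56 uncatalysed reactions: ln(k_non) at 25 °C and Q10
rng = np.random.default_rng(3)
ln_knon = rng.uniform(-46.0, -7.0, size=56)
Q10 = 1.0 - 0.15 * ln_knon + rng.normal(0.0, 1.0, size=56)

r_mean, r_se = jackknife_correlation(Q10, ln_knon)
print(f"jackknife r(Q10, ln k_non) = {r_mean:.2f} +/- {r_se:.3f}")
```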
THE CATALYTIC POWER OF ENZYMES For enzyme‐catalysed reactions, all E a values remained within the relatively narrow range from about 40 to 70 kJ mol −1 , with no apparent consistent correlation with the quality of the reactants involved. Compared to a much wider range of values from about 90 to 200 kJ mol −1 in uncatalysed reactions, this narrow range of E a for enzyme‐catalysed reactions is also supported by a comprehensive synthesis on the universality of enzymatic rate–temperature dependencies that showed a consistent Q 10 across hundreds of enzymes (Elias et al., ). Indeed, enzymes typically catalyse reactions at time scales of seconds at biological temperatures, even for reactions that have half‐times of millions of years without support by catalysts (Radzicka & Wolfenden, ). Enzymes can achieve this by lowering the energy barrier or E a of the uncatalysed reactions (Figure ). The rates of enzyme‐catalysed reactions ( k cat ) generally range only about 10 4 ‐fold (seconds to hours, Wolfenden, ) while the rates of their corresponding uncatalysed reactions ( k non ) can vary 10 19 ‐fold from 10 −1 to 10 −20 s −1 . This implies vastly different rate enhancements ( k cat / k non ) in catalysing chemical reactions by different enzymes (Radzicka & Wolfenden, ). For enzymes to be able to function meaningfully and productively within current Earth's environment, they must be able to lower the energy barriers of their reactions to similar levels (Figure ; Figure ; Table ), irrespective of the original energy barriers of uncatalysed reactions. For the catalysis of different reactions, enzymes, therefore, must have been able to achieve much greater efficiency enhancements and lowering E a by much greater amounts than for simpler reactions (Figure ; Figure ). This would have also coincidentally resulted in the loss of E a vs. ln( k non ) or Q 10 vs. ln( k non ) correlations that we observed in the uncatalysed reactions (Figure ). We need to re‐emphasise that for biological processes, like SOM decomposition that involve enzyme‐catalysed reactions, the catalytic power of enzymes can overcome the physico‐chemical constraints of uncatalysed reactions as assumed in the CQT hypothesis. Uncatalysed rates can then serve as the carbon quality index that represents the inherent chemical recalcitrance of different reactions. In intact soils, however, physical protection provides an additional carbon stabilisation mechanism to determine the overall stability of organic matter in the soil. The actual temperature dependence of decomposition in soils, therefore, will depend on the combined effect of two independent processes, that is, microbial decomposition and adsorption/desorption, which correspond to chemical recalcitrance and physical protection, respectively. The interaction between the two concomitant, yet thermodynamically independent processes (Numa et al., ; Pignatello, ; Ten Hulscher & Cornelissen, ), could thus lead to contrasting conclusions from different experiments (Conant et al., ). The combined overall effect can then vary with the relative importance and contribution of the two processes.
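The scale of these catalytic effects can be made concrete with transition-state-theory arithmetic: the free-energy barrier reduction implied by a given rate enhancement is RT·ln(k_cat/k_non). The sketch below uses invented rate constants of the magnitudes quoted above.

```python
import numpy as np

R, T = 8.314, 298.15                      # J mol^-1 K^-1, K

# Hypothetical enzyme: uncatalysed vs. catalysed first-order rate constants
k_non = 2.0e-20                           # s^-1
k_cat = 2.0e1                             # s^-1

rate_enhancement = k_cat / k_non          # ~1e21-fold
# Free-energy barrier reduction implied by transition-state theory
delta_barrier_kJ = R * T * np.log(rate_enhancement) / 1000.0
print(f"rate enhancement = {rate_enhancement:.1e}, "
      f"barrier lowered by about {delta_barrier_kJ:.0f} kJ mol^-1")
```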
CONCLUSIONS In summary, our analysis of carbon quality and temperature sensitivity does not support the CQT hypothesis for microbial decomposition of unprotected soil carbon. Regardless of chemical quality, the temperature sensitivity of the enzymatic decomposition of unprotected soil carbon remains similar. This finding suggests that the microbial decomposition of chemically recalcitrant soil carbon is unlikely to respond to warming more strongly than that of labile carbon. However, under a warming climate, the decomposition rate of both recalcitrant and labile carbon will be enhanced, leading to the attendant release of more carbon from soils into the atmosphere (García‐Palacios et al., ). But, contrary to the assertion of the CQT hypothesis, that risk does not appear to be further amplified by a heightened temperature sensitivity of the chemically more recalcitrant fractions of soil carbon.
Lìyǐn L. Liáng, Miko U. F. Kirschbaum, Vickery L. Arcus, and Louis A. Schipper conceived and developed the ideas through countless discussions. Lìyǐn L. Liáng corrected and analysed the data with support from Miko U. F. Kirschbaum, Vickery L. Arcus, and Louis A. Schipper. Lìyǐn L. Liáng wrote the manuscript with contributions from all authors.
The authors declare no competing financial interests.
Appendix S1.
|
High‐dimensional propensity scores for empirical covariate selection in secondary database studies: Planning, implementation, and reporting
|
ee77b126-2269-4236-8f3b-464a2cdfd42a
|
10099872
|
Pharmacology[mh]
|
The high‐dimensional propensity score (hdPS) is an automated, data‐driven analytic approach for covariate selection that empirically identifies pre‐exposure variables and proxies to include in a propensity score model. This paper provides an overview of the hdPS approach, recommendations on the planning, implementation, and reporting of hdPS, and a checklist with key considerations in the use of hdPS. An hdPS implementation involves careful consideration of data dimensions, identification of empirical variables and proxies, prioritization and selection of empirically identified variables, and estimating the propensity score. To promote reproducibility and transparency of studies using real‐world data, reporting documentation should include all key decisions. INTRODUCTION Comparative effectiveness and safety studies using real‐world data are being adopted for regulatory, payer, and clinical decision‐making. However, one major criticism of these nonrandomized studies is the potential presence of unmeasured confounding, which can result in biased estimates of treatment effects. Real‐world evidence (RWE) used for high‐stakes decision‐making must follow the principles of epidemiology in design and analysis, and apply methods to minimize confounding given the lack of randomization. Traditional propensity score (PS) analysis is a commonly used technique. As used in pharmacoepidemiology, a PS is the estimated probability that a patient will be treated with one drug versus an alternative, and summarizes a range of confounders; using a PS, investigators can adjust for a large number of measured preexposure covariates. If all confounders are adjusted for, and the confounding does not vary after exposure, then the treatment effect estimate should be unbiased. If some confounders are not able to be accounted for directly, in a PS or otherwise, the concept of proxy measures may help, particularly when working with secondary data that were not generated to answer a specific research question. Proxy measure adjustment does not require investigators to measure confounders directly and exactly, but rather to measure observable markers correlated with these confounders. For example, frailty is a known confounder in studies examining interventions' effect on mortality in elderly populations, but frailty itself is difficult to measure in claims data. To capture frailty, investigators can use proxies such as use of a wheelchair or oxygen canisters, and use those proxies either directly or as part of a more complex algorithm. Over the last decade, the high‐dimensional propensity score (hdPS) method has emerged as an approach that builds on the idea of large‐scale proxy measurements of unmeasured confounders for improved confounding adjustment in the analysis of healthcare databases. First introduced in 2009, hdPS is an automated, data‐driven analytic approach for covariate selection that empirically identifies preexposure variables (“features” in data science parlance) to include in the PS model. hdPS confers several attractive advantages versus manual identification of confounders and proxies, including data source independence, data‐optimized covariate selection, and the ability to be coupled with traditional PS approaches. The method has been shown to yield similar results as investigator‐driven approaches. Existing guidance documents and user guides touch upon the use of hdPS in pharmacoepidemiology and comparative effectiveness research. 
However, we currently lack best practice guidelines explaining when and how to implement hdPS, and we lack guidance to support decision‐makers in fully understanding this method where it has been applied. The paper provides a comprehensive guide on the planning, implementation, and reporting of hdPS approaches for causal treatment effect estimations using longitudinal healthcare databases. We supply a checklist with key considerations as a supportive decision tool to aid investigators in the implementation and transparent reporting of hdPS techniques, and to aid decision‐makers unfamiliar with hdPS in the understanding and interpretation of studies employing this approach. This article is endorsed by the International Society for Pharmacoepidemiology.

PREIMPLEMENTATION STUDY PLANNING

2.1 Basic study design

The approach to designing and conducting a study that employs hdPS does not vary from other pharmacoepidemiologic analyses: core activities include developing a protocol that details data sources, study design, variable measurements, and a data analysis plan, executing the study according to best practices, and documenting the process following accepted guidelines. The guidelines for Good Pharmacoepidemiology Practice and ENCePP methodological standards recommend the development of a protocol prior to conducting a study and implementing the analysis, and this protocol should include known or suspected confounders that should be accounted for. hdPS can be a useful addition should the investigator believe that not all of the confounders are known a priori and/or can be suitably measured. The choice to use hdPS is no different than for any other analytic technique in that its rationale for use and implementation details should be shared as part of the study design.

2.2 Data sources

One of the benefits of employing hdPS is the ability to leverage comprehensive longitudinal claims data, and/or electronic health records (EHRs) with deep clinical information, to adjust for confounding. The hdPS approach is data source‐independent in that the hdPS algorithm operates without consideration of the semantics (clinical meaning) of coded or uncoded information; as such, any data source, regardless of data structure or coding systems, can be utilized. While the hdPS approach was first developed using US‐based administrative claims data, the method has been used in geographically diverse datasets, such as UK EHRs, Danish registry data, French claims data, German claims data, and Japanese claims data. Being data source independent, however, does not imply that knowledge of the data source is not important: even with automated variable selection, one should have familiarity with the data source and content of the data to ensure optimal identification of variables to manually include or exclude, as well as for parameter specification for automated covariate identification. Knowledge of the structure of underlying coding systems is particularly important, including how codes are utilized and whether hierarchies among codes may affect interpretation. For example, US administrative claims generally have longitudinal data with inpatient and outpatient diagnoses coded with the International Classification of Diseases, 10th Revision, Clinical Modification (ICD‐10‐CM) coding system, which is hierarchical. By contrast, UK EHRs, which use the National Health Service's READ Codes, are less structured, have varying frequency of recorded data, and have lower granularity.
Even among countries that use the same coding systems—ICD-10 codes are used in many countries worldwide—the way that codes are recorded may not be directly comparable. As an example, while US claims data typically include diagnosis and procedure codes from both inpatient and outpatient settings, the Nordic healthcare systems do not capture codes observed in primary care. Understanding the level of data capture, data granularity, and completeness of recording is critical: while the hdPS approach can extract all likely confounders in virtually any data source, it cannot overcome an inherent lack of information. 2.3 hdPS implementation steps The following section discusses implementation of the hdPS algorithm, as applied to a specific study question and in specifically selected fit-for-purpose data sources. While choices of parameters are discussed throughout this section, a summarized checklist can be found in Table . 2.4 Selection of data dimensions hdPS variable identification is built upon identifying codes present in one or more data dimensions. A data dimension is a type of patient data—such as inpatient events, outpatient events, drug fills, or lab tests—recorded in healthcare data (Figure ). Rather than looking at all data taken together, hdPS considers data dimensions one at a time to avoid mixing measurements of heterogeneous meaning and quality. Within each dimension, variables are created from the presence of codes in patient records, such as diagnosis codes or drug identifiers; for each of often several thousand codes, patients are noted to have the code present or not present, thus creating a high-dimensional variable space. Each dimension will have an associated coding system, such as ICD-10-CM codes for inpatient diagnoses, Current Procedural Terminology (CPT) codes for outpatient procedures, and National Drug Codes or generic drug names for outpatient pharmacy drug dispensing. When coding systems are hierarchical, a decision must be made as to what level of the hierarchy to consider. Generally speaking, the lowest level of granularity (highest level of specificity) may be too granular for hdPS, as the prevalence of any given code will tend to be low. Selecting a level that gives an appropriate amount of clinical context without too much detail will be most effective. For example, ICD-10-CM code E11.3 (Type 2 diabetes mellitus with ophthalmic complications) may provide sufficient confounding information as opposed to a code deeper in the hierarchy, such as E11.321 (Type 2 diabetes mellitus with mild nonproliferative diabetic retinopathy with macular edema) or even E11.3211 (as above, but left eye specifically). To extract additional information from the presence of codes, codes can be further classified by frequency prior to exposure (occurring once, sporadically, or frequently). Extensions to hdPS have also considered temporality relative to exposure (proximal to exposure, evenly distributed, and distal to start) (Figure ). With the codes and the variations considered, a typical hdPS analysis may consider thousands of variables for each patient. Table contains examples of data dimensions used in various data sources in North America, Europe, and Japan. Typical data dimensions specified in US claims data are inpatient and outpatient diagnoses and procedures, and drug dispensing. However, other data dimensions, such as staging and biomarker information for an oncology study, may be specified as needed for specific study questions, as available in specific data sources.
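To make the idea of dimension-by-dimension processing and hierarchy truncation concrete, the following sketch (Python with pandas) groups toy baseline records by dimension and collapses diagnosis codes to a mid-level of the hierarchy before any variables are built. The column names, the four-character truncation level, and the records themselves are assumptions of this illustration, not part of the hdPS specification.

```python
import pandas as pd

# Toy long-format baseline claims: one row per recorded code during the
# covariate assessment window (all column names are illustrative assumptions).
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "dimension":  ["dx_outpatient", "dx_outpatient", "rx",
                   "dx_inpatient", "rx", "dx_outpatient"],
    "code":       ["E11321", "I10", "metformin",
                   "E113211", "lisinopril", "I10"],
})

def truncate_code(code: str, dimension: str, level: int = 4) -> str:
    """Collapse hierarchical diagnosis codes to a chosen level of detail.

    Diagnosis codes are truncated to their first `level` characters
    (e.g., E11321 -> E113, roughly 'T2DM with ophthalmic complications'),
    while non-hierarchical dimensions such as drug names are left as-is.
    """
    if dimension.startswith("dx"):
        return code[:level]
    return code

claims["code_grouped"] = [
    truncate_code(c, d) for c, d in zip(claims["code"], claims["dimension"])
]

# Dimensions are handled one at a time, never pooled together.
for dim, rows in claims.groupby("dimension"):
    print(dim, sorted(rows["code_grouped"].unique()))
```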
2.5 Identification of empirical variables and proxies The hdPS algorithm begins with identification and measurement of variables and proxies (Figure , Step 1). All variables automatically created from healthcare databases are called "empirically identified" variables, which contrast with more traditional "investigator-specified" variables. All of these variables, each a potential confounding factor, are identified during a covariate assessment window, usually defined as the time period covering the assessment of baseline patient covariates and prior to study entry (index date) (Figure ). Typically, measurement of nontime-varying factors after the index date would lead to bias by adjustment for intermediates; whether to measure factors on the index date itself is a study-specific choice. The hdPS algorithm considers distinct codes as recorded in each dimension—without needing to understand their specific meaning—and turns these codes into dichotomous variables. Codes are considered as yes/no values indicating the presence of each code during the covariate assessment window, and are ranked according to prevalence within the dimension (Figure , Step 2). Because the variable-generating algorithm is agnostic to the semantics of each feature, it can therefore be applied to almost any structured or unstructured data source and coding system. The hdPS originally developed by Schneeweiss et al. suggested considering the 200 most prevalent codes in each data dimension. There is debate as to the optimal maximum number of most prevalent codes to specify. In practice, going beyond 100 prevalent codes likely makes little difference, depending on the data source and data type. In Scandinavian data sources, which contain less rich data than, for example, US claims data, Hallas and Pottegård showed that going above 100 covariates per dimension (200 total covariates in their case) demonstrated no additional improvement; the additional covariates were absent (false) for almost all study individuals. Schuster et al. also explicitly omitted codes with very low prevalence or very infrequent occurrence, and it has been argued that the prevalence filter may not be necessary. At a sufficiently large number, the precise choice of n may not strongly impact study results. Once the n most prevalent codes in each data dimension are identified, the algorithm creates three binary intensity variables for each code, indicating at least one occurrence of the code over the covariate assessment window, sporadic occurrences of the code, and many occurrences of the code (Figures and , Step 3). The high number of codes considered leads to the high dimensionality of the algorithm. A typical example with five data dimensions (inpatient diagnoses, inpatient procedures, outpatient diagnoses, outpatient procedures, pharmacy dispensing) yields up to 3000 binary variables per patient (five data dimensions * n = 200 prevalent codes per dimension * three levels of frequency per code). Additional dimensions, such as lab test results, biomarker status, or words or phrases in free-text notes, or more variables in each dimension, would lead to substantially larger numbers of candidate variables.
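As a concrete illustration of Steps 2 and 3 within a single dimension, the sketch below ranks codes by prevalence, keeps the n most prevalent, and expands each into the three intensity indicators. The cut-offs used for "sporadic" and "frequent" (the median and the 75th percentile of within-patient counts among patients who have the code at least once) follow a commonly described convention, but they, the toy data, and the value of n are assumptions of this sketch.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy within-dimension data: counts of each code per patient during the
# covariate assessment window (patients x codes); values are illustrative.
n_patients, codes = 500, [f"dx_{i:03d}" for i in range(30)]
counts = pd.DataFrame(
    rng.poisson(0.4, size=(n_patients, len(codes))), columns=codes
)

TOP_N = 10  # number of most prevalent codes retained for this dimension

# Step 2: rank codes by prevalence (share of patients with >= 1 occurrence).
prevalence = (counts > 0).mean().sort_values(ascending=False)
selected = prevalence.head(TOP_N).index

# Step 3: expand each selected code into three binary intensity variables.
features = {}
for code in selected:
    c = counts[code]
    positive = c[c > 0]  # patients who have the code at all
    features[f"{code}_once"] = (c >= 1).astype(int)
    features[f"{code}_sporadic"] = (c >= positive.median()).astype(int)
    features[f"{code}_frequent"] = (c >= positive.quantile(0.75)).astype(int)

X_dim = pd.DataFrame(features)
print(X_dim.shape)  # (500, 30): TOP_N codes, three intensity variables each
```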
2.6 Prioritization and selection of empirical variables Successful confounding adjustment with PSs controls for all risk factors associated with the outcome, even if they are seemingly unrelated to treatment choice or only weakly associated with the exposure of interest. One problem with a high number of risk factors in a PS model, however, is the practical challenge of estimating patients' PSs. For example, including all 3000 variables from the above example without prioritization or selection is likely unfeasible with standard logistic regression. Including too many variables would also lead to inefficiencies due to collinearity and possible bias amplification by including instrumental variables (IVs, variables associated with exposure but not associated with outcome; more below). Therefore, hdPS uses a heuristic process to determine which of the variables appear most important to include in the PS model. The basic hdPS algorithm reduces the large number of candidate covariates by prioritizing covariates using a scoring algorithm and selecting for inclusion in the PS the k covariates that score highest (Figure , Steps 4 and 5). Schneeweiss et al. noted that k = 500 compared with k = 200 covariates yielded little change to the effect estimate. Likewise, in an analysis using German statutory health insurance data, the authors noticed an insubstantial change in results when varying the number of covariates from k = 500 to k = 100, 200, and 1000 covariates. A traditional PS variable selection algorithm would prioritize variables according to their association with exposure (RR_CE). This may not work with hdPS, however, because the candidate variables are empirically identified proxies as opposed to a priori specified confounders, so the pool of candidates may contain both confounders and IVs. Alternatively, a scoring algorithm prioritizing variables by their outcome association (RR_CD) may overlook variables that are important predictors of exposure (the focus of PS estimation), and debate is ongoing on the utility of the outcome ranking method. In most cases, a combination of the two is used: the original hdPS algorithm employed the formula by Bross, which scores variables based on the observed joint association between covariate and outcome (RR_CD) and covariate and exposure (RR_CE) (Figure ). While hdPS to date has generally considered variables one by one, more advanced implementations, such as machine learning algorithms to identify predictors of the outcome, ensemble methods pooling multiple machine learning algorithms, or the use of regularized regression in related techniques such as large-scale propensity scores, have been demonstrated. With that said, the Bross approach has been observed to be effective and durable, and is recommended for most applications.
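A minimal sketch of Bross-style prioritization is shown below (Python). The bias-multiplier formula is written from its commonly cited form, combining the covariate's prevalence among exposed and unexposed patients with its covariate-outcome relative risk, and a 0.1 zero-cell correction (discussed under limitations later in this article) keeps the 2x2 table computable; the toy data, variable names, and default k are illustrative assumptions rather than prescriptions.

```python
import numpy as np
import pandas as pd

def bross_rank(X: pd.DataFrame, exposure: pd.Series, outcome: pd.Series, k: int = 200) -> pd.Index:
    """Rank binary candidate covariates by the Bross bias multiplier.

    For each covariate C, the multiplier combines its prevalence among the
    exposed (p_c1) and unexposed (p_c0) with its outcome association (rr_cd);
    covariates are ordered by |log(multiplier)| and the top k are returned.
    A 0.1 zero-cell correction keeps the 2x2 outcome table computable.
    """
    scores = {}
    for col in X.columns:
        c = X[col].astype(bool)
        p_c1 = c[exposure == 1].mean()
        p_c0 = c[exposure == 0].mean()
        # Outcome risk among patients with vs. without the covariate,
        # with 0.1 added to each cell of the 2x2 table (zero-cell correction).
        a = outcome[c].sum() + 0.1
        b = (~outcome[c].astype(bool)).sum() + 0.1
        d = outcome[~c].sum() + 0.1
        e = (~outcome[~c].astype(bool)).sum() + 0.1
        rr_cd = (a / (a + b)) / (d / (d + e))
        bias_mult = (p_c1 * (rr_cd - 1) + 1) / (p_c0 * (rr_cd - 1) + 1)
        scores[col] = abs(np.log(bias_mult))
    return pd.Series(scores).sort_values(ascending=False).head(k).index

# Toy inputs: candidate covariates plus simulated exposure and outcome.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.integers(0, 2, size=(500, 40))).add_prefix("c")
exposure = pd.Series(rng.integers(0, 2, size=500))
outcome = pd.Series(rng.binomial(1, 0.1, size=500))
print(bross_rank(X, exposure, outcome, k=10).tolist())
```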
2.7 Including investigator-specified covariates While hdPS is generally effective at identifying and selecting variables that are measured with recorded codes—so much so that investigator specification of such variables may not be required at all—other variables will likely need to be entered specifically by the investigator. In any hdPS analysis, it is strongly recommended to specifically include patient attributes such as age, sex, and other measured factors that may be confounders. It is also recommended to include typical health service utilization variables such as number of office visits, number of drug prescriptions filled, total cost of inpatient or outpatient care, or number of unique medications dispensed, as these are generally good markers of health status and disease severity. Like other covariates, these markers are measured over the covariate assessment period, or over a standard period such as 6 months or 1 year prior to cohort entry. Further investigator-specified covariates can also be included. While doing so may introduce collinearity between investigator-specified variables and those identified by hdPS—which can affect interpretability of the PS model coefficients but does not negatively impact the PS itself—explicitly incorporating the subject-matter expertise of the investigator may provide additional levels of transparency and interpretability, since these prespecified variables are apparent and verifiable in a typical "Table ." 2.8 Excluding instrumental variables and colliders While PSs tend to be forgiving with respect to what variables are included, two sources of bias introduced by variable inclusion are well documented: "Z-bias" and "M-bias," each of which is described below. From the outset, however, we note that while Z-bias should be actively avoided, M-bias tends not to be an issue in day-to-day practice. As briefly described above, an IV is a variable associated with the treatment assignment but not the outcome; the canonical IV is the random treatment assignment in a randomized clinical trial. Adjusting for an IV, often denoted Z, may increase the bias (Figure ). It is well known that IVs should not be included in a PS, high-dimensional or otherwise. Using the typical prioritization with the Bross formula—which considers variables' joint association between exposure and outcome—may help avoid Z-bias, as the Bross prioritization tends not to select variables that only have an exposure association. However, to the extent that IVs can be identified either a priori or through inspection of hdPS's selected variables, they should be manually removed. One common way to identify potential IVs is to score all variables by quintile of exposure association and outcome association. Inspecting those variables in the top quintile of exposure association and bottom quintile of outcome association may help identify IVs. As a practical matter, if it is unclear whether a variable is an IV or a confounder, erring on the side of assuming it is a confounder is likely the safer choice in nonrandomized research. Separately, colliders—variables that are the common effect of exposure and outcome, or a common effect of two variables that themselves each affect exposure or outcome—should also be excluded from a PS (Figure ). Colliders may be more difficult to identify than IVs, though consistently measuring variables prior to the index date will tend to minimize their presence. A simulation study showed that bias due to controlling for a collider—M-bias, so named because when collider relationships are plotted in a directed acyclic graph, they often resemble the letter M—was small, unless associations between the collider and unmeasured confounders were very large (relative risk > 8). As above, controlling for confounding should take precedence over avoiding M-bias.
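The quintile screen for potential IVs can be automated along the following lines; the association measures, the small continuity correction, and the toy data are assumptions of this sketch, and any flagged covariates would still require clinical review before removal.

```python
import numpy as np
import pandas as pd

def flag_potential_instruments(X: pd.DataFrame, exposure: pd.Series, outcome: pd.Series) -> pd.DataFrame:
    """Cross-classify covariates by quintile of exposure and outcome association.

    Covariates in the top quintile of exposure association but the bottom
    quintile of outcome association are returned for manual review as possible
    instruments; the |log RR| association measure and 0.001 correction are
    choices made for this sketch, not fixed parts of the hdPS algorithm.
    """
    rows = []
    for col in X.columns:
        c = X[col].astype(bool)
        rr_ce = (exposure[c].mean() + 0.001) / (exposure[~c].mean() + 0.001)
        rr_cd = (outcome[c].mean() + 0.001) / (outcome[~c].mean() + 0.001)
        rows.append((col, abs(np.log(rr_ce)), abs(np.log(rr_cd))))
    df = pd.DataFrame(rows, columns=["covariate", "exp_assoc", "out_assoc"])
    df["exp_q"] = pd.qcut(df["exp_assoc"], 5, labels=False, duplicates="drop")
    df["out_q"] = pd.qcut(df["out_assoc"], 5, labels=False, duplicates="drop")
    return df[(df["exp_q"] == df["exp_q"].max()) & (df["out_q"] == 0)]

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.integers(0, 2, size=(500, 40))).add_prefix("c")
exposure = pd.Series(rng.integers(0, 2, size=500))
outcome = pd.Series(rng.binomial(1, 0.1, size=500))
print(flag_potential_instruments(X, exposure, outcome))
```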
2.9 Estimating the propensity score The steps above will yield long lists of prioritized covariates, which should collectively capture a substantial portion of the underlying confounding. The final step is to estimate a PS, and to use that PS to control for confounding. PSs are often estimated using logistic regression, and as such, the standard estimation method for the hdPS is to use logistic regression to predict the probability of exposure as a function of all hdPS covariates, investigator-specified and empirically identified. PSs are designed to reduce a large number of covariates into a single value, but in the hdPS case, the number of those covariates can be quite large. Estimation of any PS is limited by the quantity of source data, and the usual recommendation is to not exceed 1 covariate in the model for every 7–10 exposed patients. For hdPS models, where the number of covariates can be large, a substantial number of exposed patients may be required for proper estimation of the hdPS. This summary score is useful in many cases, including when there are a large number of covariates and a small number of outcomes. In those instances, parametric and regularized outcome regression have been recognized to have inadequate confounding adjustment. 2.10 Estimating the treatment effect While the nuances of the application of PSs for confounding adjustment are outside the scope of this article, we note that once estimated, the hdPS will function as a traditional PS, and traditional approaches including matching, weighting, stratification, and fine stratification are all appropriate with hdPS (Figure , Steps 6 and 7).
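A minimal sketch of this final estimation step, assuming the selected empirical and investigator-specified covariates have already been assembled into a design matrix: scikit-learn's logistic regression (which applies an L2 penalty by default, a pragmatic deviation from plain maximum-likelihood estimation) predicts the probability of exposure, and the fitted score is then used like any PS, here with simple quintile stratification as one of the options listed above. All data and names are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy design matrix: empirically identified hdPS covariates from the earlier
# steps plus a few investigator-specified covariates.
n = 2000
X = pd.DataFrame(rng.integers(0, 2, size=(n, 60))).add_prefix("hdps_")
X["age"] = rng.normal(65, 10, n)
X["female"] = rng.integers(0, 2, n)
exposure = pd.Series(rng.integers(0, 2, size=n), name="exposure")

# Estimate the (hd)PS as the predicted probability of exposure.
ps_model = LogisticRegression(max_iter=1000)
ps_model.fit(X, exposure)
ps = pd.Series(ps_model.predict_proba(X)[:, 1], name="ps")

# Once estimated, the hdPS is used like any PS; here, quintile stratification
# (matching, weighting, or fine stratification would proceed analogously).
strata = pd.qcut(ps, 5, labels=False)
print(pd.crosstab(strata, exposure))
```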
MEASURING hdPS PERFORMANCE Diagnostic tools are frequently used to evaluate the performance of analytic approaches, and the diagnostics for hdPS demonstrate or illustrate several of the items noted above: that balance on measured covariates has been achieved, that instruments have been removed, and that to the best of the investigator's ability, confounding has been accounted for. 3.1 Covariate balance diagnostics Because PS methods are intended to control for confounding by balancing covariates between exposed and referent patients, demonstrating qualitative success in doing so is typically achieved by constructing a "Table " outlining baseline patient characteristics of study participants before and after PS adjustment, with the goal of showing that baseline characteristics are balanced between the two comparison groups.
In a typical PS analysis, the variables in this Table are generally those variables that were entered into the PS model; with hdPS, a typical Table would have all investigator‐specified variables, with additional empirically identified variables appearing in a supplementary or online table. Inclusion of variables not specified by the investigator but that may have an expectation of imbalance in the Table can help verify whether treatment group imbalance has in general been resolved by the hdPS. More quantitatively, balance‐checking techniques are recommended for both investigator‐specified covariates (including key demographic variables like age and sex) and empirically identified variables. A common diagnostic to demonstrate balance between two comparison groups is to report for each variable the absolute standardized mean difference between the two treatment groups; this value is calculated as the absolute value of the difference in standardized mean in each group. An absolute standardized mean difference of 0.1 or less is an often‐used threshold to indicate adequate balance between treatment groups. A number of other diagnostics are also commonly employed. With that said, for empirically identified variables, imbalance may result for reasons that do not indicate lack of comparability between the exposure and comparator groups. For example, if an empirically identified variable impacts the outcome but not exposure, it may appear imbalanced; however, it may well be appropriate to include it in the PS, and since it is de facto not a confounder, no bias should result. Separately, if an empirically identified variable is strongly correlated with other empirically identified or investigator‐specified variable, balance may be achieved among the correlates but not the variable in question. For that reason, not all residual imbalances of individual variables result in bias, but they need inspection and explanation to the extent possible. , , 3.2 Graphical diagnostics Visualizations are also helpful to visualize the performance of covariate balance and comparability between comparison groups. Typical visualizations include plots of the PS distribution before and after matching or weighting, and plots of standardized differences before and after application of the hdPS. For example, to demonstrate the performance of hdPS‐matching, Blin et al. presented the standardized mean differences before and after matching as well as the overlap in hdPS distribution, which can help identify cases of nonpositivity (Figure ). It is noted that these visualizations are not unique to hdPS and are suggested for any PS analysis. A useful hdPS‐specific diagnostic is a forest plot of the estimated treatment effects as sequentially more confounding adjustment is applied, displaying the unadjusted (crude) estimate, the estimate after adjustment with key demographic covariates (e.g., age and sex), the estimate with adjustment for all investigator‐identified covariates, and the estimate after hdPS has been applied. Such a plot has the ability to show the added value (or perhaps lack of value) of including the empirically identified covariates, as measured relative to a known ground truth. Another visualization that can be useful is a plot of the treatment effect estimate as additional empirically identified covariates are added to the hdPS model (Figure ). 
If the estimate with 50 versus 100 variables is substantially different, this implies that the addition of 50 variables to the hdPS was useful in additional confounding control. On the other hand, if a large number of variables are added and there is no change to the treatment effect estimate, then that suggests that a more parsimonious hdPS model may be appropriate.
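The standardized-difference diagnostic described in Section 3.1 is straightforward to compute; the sketch below reports it for each covariate before and after weighting on an already-estimated score. The ATT-style weights, toy covariates, and score are illustrative assumptions, and the 0.1 value in the final comment is the rule of thumb mentioned above rather than a hard cut-off.

```python
import numpy as np
import pandas as pd

def std_mean_diff(x: pd.Series, treated: pd.Series, weights: pd.Series = None) -> float:
    """Absolute standardized mean difference of one covariate between groups."""
    w = pd.Series(1.0, index=x.index) if weights is None else weights
    m1 = np.average(x[treated], weights=w[treated])
    m0 = np.average(x[~treated], weights=w[~treated])
    v1 = np.average((x[treated] - m1) ** 2, weights=w[treated])
    v0 = np.average((x[~treated] - m0) ** 2, weights=w[~treated])
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2)

# Toy data: two covariates, an exposure indicator, and a previously estimated score.
rng = np.random.default_rng(4)
n = 2000
X = pd.DataFrame({"age": rng.normal(65, 10, n), "prior_mi": rng.integers(0, 2, n)})
treated = pd.Series(rng.integers(0, 2, size=n).astype(bool))
ps = pd.Series(np.clip(rng.beta(2, 5, n) + 0.2 * treated, 0.01, 0.99))

# ATT-style weights: 1 for treated, ps/(1-ps) for comparators (one of several options).
weights = pd.Series(np.where(treated, 1.0, ps / (1 - ps)))

balance = pd.DataFrame({
    "smd_unadjusted": [std_mean_diff(X[c], treated) for c in X.columns],
    "smd_weighted": [std_mean_diff(X[c], treated, weights) for c in X.columns],
}, index=X.columns)
print(balance.round(3))  # values <= 0.1 are commonly read as adequate balance
```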
TRANSPARENCY AND DOCUMENTATION Overall efforts to improve the reproducibility and transparency of studies using real-world data are broadly underway. For example, Wang et al. developed a structured template to aid in planning and reporting study methods, including hdPS if used, and recommend including key specifications such as the algorithm for covariate definition and other parameters (e.g., covariate assessment window, code type and granularity, diagnosis position) (Table ). Though not exhaustive, the following are items that should be reported and documented, first as part of a study protocol, and later as part of a final study report. By and large, the items below are syntheses of the decisions discussed above and thus will be familiar. Parameters for covariate identification. Within hdPS, decisions around how covariates will be identified, ranked, and selected should be prespecified and documented. These parameters include which data dimensions will be considered (e.g., inpatient, outpatient, pharmacy); which coding systems will be used (e.g., ICD-9-CM, ICD-10-CM, CPT); to what level of detail the codes will be captured (e.g., the first three characters of ICD-10-CM codes); how many codes per dimension will be considered; how many variables will be included overall; and what ranking method will be applied (e.g., bias ranking, exposure association ranking). Investigator-specified variables. As in all pharmacoepidemiology studies, noting a priori what confounders the investigators deem important to specifically adjust for is an important part of the analysis plan. Unlike typical protocols that include all variables, with hdPS, only investigator-identified variables will be prespecified, since the hdPS approach will empirically select further variables. Investigator-specified excluded variables. Investigators should note any variables they consider instruments—and thus not appropriate to include in the hdPS—ahead of time. Such variables would include direct or near-direct proxies for exposure. If further variables are excluded over the course of the study, those should be documented in the final study report. Estimation and use of PS.
As with any PS, the method for estimating the hdPS (e.g., logistic regression) should be noted, along with any criteria for removing variables that may affect the estimation (e.g., employing a prevalence filter, such as not having at least five exposed and five referent patients). Furthermore, how the hdPS will be used in the analysis (e.g., matching, weighting) should be noted, as well as the software environment in which the score will be estimated and used. Diagnostics and reporting. The diagnostics to be employed (e.g., inspection of selected variables, surveilling for IVs) along with actions to take should anomalies be detected should be noted, as should other output (e.g., PS distribution plots, sequential addition of variables plot). We also recommend including a detailed list of variables included along with interpretable descriptions (e.g., the ICD‐10‐CM code description alongside the ICD‐10‐CM code) as a table or supplemental appendix. Software can aid in creating this list. Sensitivity analyses . While the decisions noted above should be made a priori, investigators may wish to vary certain parameters to determine robustness of the result or otherwise test their assumptions. For example, investigators may choose to conduct sensitivity analysis varying the confounder selection strategy with or without investigator‐identified covariates. To the extent possible, these sensitivity analyses should also be specified ahead of time, while acknowledging that certain variations may be made in response to observed data or observed performance of the hdPS. Any post hoc sensitivity analyses should be called out as such in the final study report. LIMITATIONS AND MISCONCEPTIONS Since its original publication, a number of limitations and misconceptions regarding hdPS have emerged. A first misconception is that data‐adaptive methods that consider hundreds of covariates for estimating the PS will lead to “over‐adjustment,” but it has been shown that the exposure effect size estimation should remain consistent even with additional covariates. With that said, adjusting for too many preexposure may lead to statistical inefficiency, so if a larger number of covariates are desired, principled data‐adaptive PS estimation such as crossvalidation methods like Super Learning (SL) can be used to protect against overfitting when estimating the PS. There is also concern that liberal variable selection—including colliders and IVs—will lead to the introduction of M‐bias and Z‐bias, respectively. We would argue that the true threat to pharmacoepidemiology studies is unmeasured confounding, and as such, M‐ and Z‐bias are second‐order concerns. Furthermore, M‐ and Z‐biases are themselves mitigated with good study design (to avoid the introduction of colliders to begin with) and strong control of unmeasured confounding. As discussed earlier, any M‐bias will most likely be small, , and the careful measurement of covariates prior to exposure is a way to avoid including many colliders. Similarly, while Z‐bias may amplify any unmeasured confounding when IVs are included in the PS, Z‐bias's effect is greatly reduced by reducing the presence of unmeasured confounding. Unmeasured confounding remains the top problem to solve. Some consider hdPS to be a black box with limited transparency. 
While it is true that the hdPS method does not allow investigators to know the covariates that will be empirically identified a priori, the specific parameter settings of an hdPS algorithm can and should be prespecified and remain unchanged through the primary analysis. And while they are not known a priori, all selected variables are fully traceable back to source data, and their impact on baseline covariate balance can be assessed through the calculation and reporting of standardized differences. hdPS can sometimes bring to light the limitations of the source data or of the research question asked. While hdPS extracts the maximum confounding information available in a database via proxy analytics to adjust for unmeasured confounding, a given data source may inherently lack data dimensions that are required to reduce residual confounding to an acceptable level. , hdPS is not a statistical technique to resolve poor data source selection, insufficient data content, or incorrect study design. The performance of hdPS may be impacted by small sample sizes, including small cohorts, few exposures, and/or few outcome events. For example, because the PS model predicts exposure, PS estimation may be challenging when the number of exposed patients is small. However, in a study where investigators sampled data from four North American cohort studies and applied hdPS methods on the samples, they obtained similar hdPS‐adjusted point estimates in the samples relative to the full‐cohorts when there were at least 50 exposed patients with an outcome event. hdPS performed well in samples with 25–49 exposed patients with an outcome event when a zero‐cell correction was applied. Zero‐cell correction allows computation of the association between the variable and outcome by adding 0.1 to each cell in the 2 × 2 table, making computable values from values that are noncomputable due to division by zero. NEW DIRECTIONS Since the publication of the original hdPS method, a number of extensions and other developments have been shown. Below are several examples of new directions that hdPS has gone in. 6.1 Treatment effect estimation The hdPS approach has most typically been applied to evaluate the effect of a static, binary treatment using PS matching. In more recent applications, hdPS was combined with alternate treatment effect estimation approaches such as inverse probability weighting and collaborative targeted minimum loss based estimation. , This was done to take advantage of these methods' improved statistical properties over PS matching, such as the ability to properly adjust for time‐dependent confounders and sources of selection bias, to employ double robustness, and to evaluate alternate causal estimands, such as the average effects of time‐varying dynamic treatment regimens. With that said, whatever the causal estimand and estimator chosen, hdPS at its core can be viewed as a pragmatic approach to automate selection of the covariate adjustment set in the analysis of healthcare databases. 6.2 Estimation of the PS After identifying hdPS‐derived covariates, the investigator must use the covariates to estimate a PS for each patient, or for outcome regression in the case of doubly robust estimation of the causal effect. The standard logistic regression estimation methods rely on parametric assumptions such as the assumption that a PS or outcome regression model can be correctly represented by a logistic linear model with only main terms for each covariate and no interactions. 
Incorrect causal inferences are expected if these—often arbitrary—modeling assumptions do not hold, for example if the logit link between the linear part of the model and the PS is incorrect. Finite sample bias and increased variability can also be expected when a large number of hdPS-derived covariates are included in the parametric models. To protect against incorrect inferences due to mis-specified parametric models and to automate dimensionality reduction of the covariate adjustment set (e.g., to reduce collinearity), statistical learning can be used to nonparametrically estimate a PS or outcome regression based on empirically identified and investigator-specified variables while maintaining explainability using, for example, Shapley Additive Explanation values. SL—an ensemble learning method—is one such approach that was proposed to improve confounding adjustment with hdPS covariates. SL is a data-adaptive estimation algorithm that combines, through a weighted average, predicted values from a library of candidate learners such as neural networks, random forests, gradient boosting machines, and parametric models—all possible methods of estimating patients' PSs. The selection of the optimal combination of learners is based on crossvalidation to protect against overfitting. The resulting learner (called the "super learner") is intended to perform asymptotically as well as or better than (in terms of mean error) any of the candidate learners considered—and the number of candidate learners can grow as large as is computationally feasible. The practical performance of combining hdPS with SL for confounding adjustment has been illustrated using both real-world and simulated data. Future research is needed to evaluate the value of alternate methodologies such as deep learning. 6.3 Other new directions 6.3.1 Unstructured data hdPS typically works with structured, coded data. However, using natural language processing methods, it is also possible to convert free text into tokens, which can stand on their own as potential variables. These data may give additional information beyond what is coded in diagnosis, procedure, medication, and other fields, especially when electronic medical records are used as source data. 6.3.2 Continuous covariates and outcomes The Bross formula typically used is intended for use with binary covariates and outcomes, but in many cases, continuous values for one or both may be appropriate. Extensions to the ranking formula can incorporate such continuous values. 6.3.3 Combination matching or weighting methods Most studies that match or weight with a PS do so exclusively with the PS variable. However, it is also possible to match (weight) on specific key investigator-identified factors, and then match (weight) on a PS.
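Returning to the Super Learning idea in Section 6.2, the sketch below uses scikit-learn's cross-validated stacking as a rough stand-in for a full SL implementation rather than the SuperLearner software referenced in the text; the choice of base learners, the meta-learner, and the toy data are assumptions of this illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
X = pd.DataFrame(rng.integers(0, 2, size=(n, 60))).add_prefix("hdps_")
exposure = rng.integers(0, 2, size=n)

# Cross-validated stacking of several candidate learners, in the spirit of
# Super Learning: base learners are combined by a logistic meta-learner
# fitted on out-of-fold predicted probabilities of exposure.
ensemble = StackingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, min_samples_leaf=20)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
    cv=5,
)
ensemble.fit(X, exposure)
ps = ensemble.predict_proba(X)[:, 1]
print(pd.Series(ps).describe())
```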
6.3 Other new directions

6.3.1 Unstructured data

hdPS typically works with structured, coded data. However, using natural language processing methods, it is also possible to convert free‐text into tokens, which can stand on their own as potential variables. These data may give additional information beyond what is coded in diagnosis, procedure, medication and other fields, especially when electronic medical records are used as source data.

6.3.2 Continuous covariates and outcomes

The Bross formula typically used is intended for use with binary covariates and outcomes, but in many cases, continuous values for one or both may be appropriate. Extensions to the ranking formula can incorporate such continuous values.

6.3.3 Combination matching or weighting methods

Most studies that match or weight with a PS do so exclusively with the PS variable. However, it is also possible to match (weight) on specific key investigator‐identified factors, and then match (weight) on a PS.

CONCLUSION

In this article, we provide an overview and guidance on the planning, implementing, and reporting of studies using the hdPS approach in the analysis of healthcare databases, an approach to minimize residual confounding by identifying and adjusting for confounding factors or proxies for confounding factors. As illustrated by case examples included in the supplemental materials, a wide range of studies across different data sources have used hdPS over the past decade, and new applications with machine learning techniques are emerging. A basic understanding of the hdPS approach—for both researchers and decision‐makers consuming RWE—and recommendations for the planning, implementation, and reporting of the hdPS process are critical for continued generation of transparent and robust RWE.

Jeremy A. Rassen drafted the manuscript from contributions from each author. Jeremy A. Rassen, Patrick Blin, Sebastian Kloss, Romain S. Neugebauer, Robert W. Platt, Anton Pottegård, Sebastian Schneeweiss, and Sengwee Toh all critically reviewed and revised the manuscript and gave final approval for publication.

Dr. Rassen is an employee of and has an ownership stake in Aetion, Inc. Dr. Kloss is an employee of Johnson & Johnson. Dr. Pottegård is an employee of the University of Southern Denmark, Clinical Pharmacology, Pharmacy and Environmental Medicine. He has participated in studies funded by pharmaceutical companies (Alcon, Almirall, Astellas, Astra‐Zeneca, Boehringer‐Ingelheim, Pfizer, Menarini, Servier, Takeda), with money paid to his employer and with no relation to the work reported in this article. Dr. Schneeweiss is participating in investigator‐initiated grants to the Brigham and Women's Hospital from Boehringer Ingelheim and UCB unrelated to the topic of this study. He is a consultant to Aetion Inc., a software manufacturer of which he owns equity. He is an advisor to Temedica GmbH, a patient‐oriented data generation company. His interests were declared, reviewed, and approved by the Brigham and Women's Hospital in accordance with their institutional compliance policies. Dr. Toh reports being a consultant to Pfizer and Merck on methodological issues unrelated to this study. Drs. Blin, Neugebauer, and Platt report no conflicts of interest.

APPENDIX S1: Supporting Information.
Is mental health staff training in de-escalation techniques effective in reducing violent incidents in forensic psychiatric settings? – A systematic review of the literature
Violence in general and forensic psychiatric settings

In mental health services violence is a current and relevant problem for professionals as well as patients. A meta-analysis of 35 international studies including 23,972 inpatients showed that the proportion of patients who committed at least one act of interpersonal violence was 17%. In a recent German study, including 64,367 admissions in psychiatric hospitals, 17,599 aggressive incidents were recorded throughout the year 2019. This study described that 5084 (7.90%) of the admitted patients showed aggressive behavior towards others. Amongst the 1,660 forensic inpatients included in this study, the proportion of aggressive behavior was even higher (20.54%). At least in Germany, data also suggest an increase of violent incidents in psychiatric hospitals over the last ten years. British authors found that on forensic psychiatric wards there are higher rates of violence compared to general psychiatry. Referring to German psychiatric hospitals, forensic psychiatry had the highest proportion of cases with aggressive behavior (20.54%), but the number of incidents per bed was lower than in general adult psychiatry as well as in child and adolescent psychiatry. Violent behaviour includes verbal and physical threats and aggression that may lead to serious injury or death. The risk of these behaviours is significant in forensic settings. This is due to the complex historical and current psychosocial needs of the patient group. Among other things, the resulting damage includes physical and psychological injuries to fellow patients and staff, diminished therapeutic relationships, lower job satisfaction of the employees as well as an increase in the number of days of sick leave.

Restrictive interventions in general and forensic mental health services

Although they are untherapeutic, restrictive interventions (i.e. manual restraint, mechanical restraint, seclusion or forced medication) are used in psychiatric hospitals as well as in forensic mental health services in order to manage aggressive behaviour. However, this kind of coercion should exclusively be used if de-escalation and other preventive strategies have failed and there is potential for harm to patients or employees if no action is taken. The use of restrictive interventions is very problematic for patients, staff and organizations. They diminish the therapeutic alliance between staff and patients. Patients experience restrictive interventions as dehumanizing, frightening, confusing and at times painful. Restrictive interventions are associated with anxiety and stress and can cause physical and psychological damage for both patients and staff. Both staff and patients might suffer injury. Moreover, especially mechanical restraint or isolation can be highly traumatic. Occasionally, restrictive measures can not only result in serious physical harm, but even patient deaths (e.g. due to physical restraint).

The concept of de-escalation and de-escalation techniques in the healthcare context

The recommended first-line response to potential violence and aggression in healthcare settings is de-escalation. This means that, especially in forensic psychiatric settings, staff need to intervene before situations escalate to a level when there seems to be no other choice but using restrictive interventions to protect themselves as well as the health and lives of their other entrusted patients.
Referring to Bowers et al., until 2011 there seemed to be no systematic description of the concept of de-escalation in the healthcare context. Nowadays, terms like “de-escalation”, “de-escalation techniques” or “de-escalation training” seem to be defined rather vaguely as well. Nevertheless, it is not possible to reasonably define the content of de-escalation trainings without clarifying these terms. For the purpose of this review it is therefore necessary to make a serious attempt to establish a working definition of de-escalation techniques. In 2012 Price and Baker strived to clarify what the term “de-escalation techniques” means in the current literature. Accordingly, de-escalation techniques are “a set of therapeutic interventions frequently used to prevent violence and aggression within mental health services”. They describe that de-escalation techniques consist of several “key components”. In a thematic literature synthesis Price and Baker found seven themes describing these key components. These themes were related either to staff skills (characteristics of effective de-escalators; maintaining personal control; verbal and non-verbal skills) or the process of intervening (engaging with the patient; when to intervene; ensuring safe conditions for de-escalation; and strategies for de-escalation). In 2014, based on the aforementioned description of de-escalation techniques, Price et al. conducted a systematic review about the learning and performance outcomes of mental health staff training in de-escalation techniques for the management of violence and aggression. In this review they describe that de-escalation techniques aim to stop the escalation of aggression to either violence or the use of physically restrictive practices via a range of psychosocial techniques. These psychosocial techniques would “typically involve the use of non-provocative verbal and non-verbal clinician communication to negotiate a mutually agreeable solution to the aggressor’s concerns”. Referring to Price and Baker, Bowers developed a simplified and rather linear model portraying “de-escalation as a process, starting with delimiting the situation, then moving on to clarification of the problem with the patient concerned, followed by reaching a resolution”. According to Bowers, this process “is only likely to succeed if, at every stage, the de-escalator is controlling their own emotions and expressing respect and empathy for the patient they are seeking to de-escalate“. In 2017, Hallett et al. conducted a concept analysis of de-escalation of aggressive behaviour in healthcare settings. They found that, considering the available literature, de-escalation in healthcare settings could be characterized as “a collective term for a range of interwoven staff delivered components comprising verbal and non-verbal communication, self-regulation, assessment, actions, and safety maintenance, which aims to extinguish or reduce patient aggression/agitation irrespective of its cause, and improve staff-patient relationships while eliminating or minimising coercion or restriction.” They also describe that de-escalation “comprises a set of skills, knowledge, and personal features in the domains of communication, self-regulation, assessment, activity, and safety maintenance“. Evidently, there is a great overlap in the aforementioned definitions regarding their key elements in terms of content. However, Hallett et al.
as well as Baker use the term “de-escalation”, while Price and Baker refer to “de-escalation techniques”. The current review focuses on the effectiveness of mental health staff training in de-escalation techniques in forensic psychiatric settings. The working definition of “de-escalation techniques” underlying this manuscript corresponds essentially to the aforementioned concept as suggested by Price et al. Accordingly, de-escalation techniques are regarded as a set of interwoven, (partially) learnable, non-physical, psychosocial techniques with the aim of stopping an impending escalation of inpatient aggression to violence in mental health services. On the one hand, the key components of these de-escalation techniques include themes relating broadly to staff skills. These staff skills include verbal skills (e.g. negotiating, tactful language, using a calm tone of voice, sensitive use of humour), non-verbal skills (e.g. attentive posture and body language, active listening, a certain degree of eye contact), the ability to maintain personal control when faced with inpatient aggression as well as the ability to express a positive, empathetic, supportive and non-authoritarian therapeutic attitude. On the other hand, de-escalation techniques accordingly include themes relating broadly to the process of intervening. This implies the ability to engage with the patient and to make reasonable assessments (e.g. about the necessity and timing of intervening, about what level of staff support is necessary and whether the area is safe). Furthermore, de-escalation strategies are regarded as key components of de-escalation techniques (e.g. shared problem solving, facilitating expression, offering alternatives to aggression, limit-setting).

Mental health staff training in de-escalation techniques within the field of general psychiatry

Training in de-escalation techniques is often a key feature of complex interventions for reducing restraint and seclusion. For years, staff training including de-escalation components to prevent and reduce verbal and physical aggression has been adopted in mental health settings. These training programs intend to promote prevention, relational security and the de-escalation of conflicts. Several different training programmes are already in use. A concrete example of such a training program is “ProDeMa” (Professional Deescalation Management). ProDeMa is a program that, according to its authors, is explicitly focused on training mental health staff in de-escalation techniques. The program was developed in Germany and intends to reduce violent incidents through seven “de-escalation levels”:

1. Prevention/reduction of violence through improvements concerning external framework conditions, e.g. aggression-inducing ward rules or process flows
2. Change of reaction patterns of the staff through change in interpretation and valuation of inpatient violence
3. Improvement of the staff’s understanding of the etiology of violent behaviour
4. Training staff in verbal de-escalation techniques
5. Teaching staff techniques to escape and defend themselves against physical attacks without harming the patient unnecessarily
6. Techniques to immobilize and restrain patients without doing unnecessary harm to them
7. Professional post-processing of escalations including inter-collegial first aid.

All in all, in the field of general psychiatry the available data concerning key outcomes (e.g. assault rate, incidence of aggression, use of physical restraint) are rather mixed.
Literature reviews about the effectiveness of de-escalation training, respectively training in de-escalation techniques, in reducing the use of coercive measures propose that more evidence is needed to evaluate their effectiveness. However, some weak indications for the efficacy of mental health staff training in de-escalation techniques in reducing violent incidents as well as the use of restraint and seclusion have already been found within the field of general psychiatry. For example, a few studies found a significantly reduced risk of physical assaults on ward level, or a significant reduction of aggressive incidents including verbal aggression and violence towards objects.

Implementation of mental health staff training in de-escalation techniques in forensic psychiatric settings

Several authors call for the implementation of similar interventions in forensic psychiatric settings. For example, Bader and Evans state that in order to reduce inpatient violence, training for nursing staff would be as important as direct drug/medical treatment of patients. Barr et al. assert that it is necessary for forensic nurses to develop skills that promote de-escalation, reduce restrictive practices and promote recovery-focused care. Maguire and colleagues also promote staff training with de-escalation techniques components. Dexter and Vitacco note that, amongst other effective treatment interventions, aggression and de-escalation training for staff should be implemented in order to prevent violence in forensic hospitals. Goodman et al. point out that successful de-escalation in a high-secure forensic setting needs strong therapeutic relationships and knowledge about the relationship between trauma and aggression. Given the adverse effects of coercive measures on patients, staff and organizations, it seems to be crucial that more evidence in this field is collected and analyzed. Consequently, mental health staff training in de-escalation techniques is being implemented within the field of forensic psychiatry. Yet, there seems to be a crucial lack of evidence around training effectiveness. Therefore, in this systematic review we will endeavor to present the current evidence for mental health staff training in de-escalation techniques in reducing violent incidents in forensic psychiatric settings.
In conducting this review we have followed the PRISMA guidelines for reporting systematic reviews.

Search strategy

We conducted a systematic literature search of publications from 2002 (the year ProDeMa was developed) up until December 2021. The search included the electronic databases Cochrane Library, Ovid PsycInfo, PubMed, ScienceDirect, Scopus and Web of Science. We combined search terms capturing forensic settings with various terms relating to health care professionals as well as de-escalation. The full search strategy is included in the supplementary material (Additional file 1).

In- and exclusion criteria

Our selection included studies related to the evaluation/assessment of a staff training program to reduce violent incidents in forensic psychiatric hospitals. Particular emphasis was placed on mental health staff training referring to de-escalation techniques, in line with the research question "Is mental health staff training in de-escalation techniques effective in reducing violent incidents in forensic psychiatric settings?".

Inclusion criteria
Studies of any type of design were included if they met the following criteria:
- Original research
- Studies in which staff training with a de-escalation techniques component was investigated
- Studies conducted in forensic mental health settings
- Human participants of all ages in forensic mental health settings
- Male and/or female participants
- Any number of participants
- Studies in all languages and from all countries

Exclusion criteria
- Conducted in general psychiatric hospitals
- Training without de-escalation elements or attitudinal component
- Non-primary research, i.e. reviews, opinions, discussion papers
Search results

The initial searches returned 15,398 potentially relevant titles. Results of the searches were reviewed independently by authors PG and DB for suitability for inclusion in the review against the criteria set out above. This was initially undertaken through inspection of titles and abstracts. A second review appraising the full papers was then undertaken as required. In the event of a difference of opinion over a paper’s suitability for inclusion, a third author (BV) was consulted. Additionally, authors DB and PG searched reference lists from both included and excluded studies for further suitable papers for inclusion. Using this approach, one more suitable study was found. A total of 174 papers were shortlisted because they seemed to describe studies in a forensic psychiatric hospital having de-escalation training as a topic or describing the type, severity, frequency or reduction of violent incidents. After screening the abstracts, 145 of these papers were excluded because they did not fit our selection criteria. Thirty full texts were finally screened. Five papers fulfilled our selection criteria and were consequently included in this review. A flow chart of our search results is set out in Fig. . Details of the studies are shown in Table .

Description of study findings

The number of studies examining the effects of staff training in de-escalation techniques in forensic psychiatric settings was very limited. Five studies were finally deemed relevant for this review. The considered studies took place in hospitals in Israel (2), Norway (1), the UK (1) and Australia (1). None of the studies was an RCT. One study was designed as a one-group posttest-only study. Four studies were designed as before-and-after comparisons without a control group. The number of participants per rating ranged from 8 to 112. The training periods lasted from 0.5 days to 3 weeks. Nesset and colleagues conducted a pilot study in a Norwegian forensic psychiatric hospital consisting of 16 beds in order to investigate whether a nursing staff training program improves the ward atmosphere and patient satisfaction. The three-week staff training taught issues around de-escalation techniques using lectures as well as role plays. Week one focused on principles of milieu therapy, week two on how the nature of work in forensic psychiatry affects the nursing staff emotionally and how the staff could contain aggressive feelings from the patients. In week three, setting limits was practiced, e.g. in role plays. After the intervention, nursing staff received no further teaching but weekly supervision continued and themes from the staff training program became a common element in these supervisions. The perception of the treatment atmosphere was measured by the revised Ward Atmosphere Scale (WAS-R) at three time points: before, immediately after and six months after the intervention. The WAS-R is a self-report questionnaire including 11 subscales, of which one measures the perception of angry and aggressive behavior displayed by the patients. Patients and staff reported a significantly lower level in the WAS subscale “angry and aggressive behavior” after the intervention. The authors concluded that it might be possible to effectively improve the ward atmosphere through conducting a nursing staff training program.
However, besides the small sample size and the absence of a control group, an important limitation of this study is that it did not explicitly evaluate whether the frequency (as opposed to the subjective assessment of patients and staff) of violent incidents actually decreased. Martin and Daffern conducted a one-group post-test-only study evaluating clinician perceptions of personal safety and confidence to manage inpatient aggression in a forensic psychiatric setting following a staff training programme with a de-escalation techniques component, called M4 (“Managing the team, Managing the environment, Managing the patient and Managing aggression”). M4 consists of a 2-day workshop including theoretical (“organizational incidence and patterns of aggression, risk assessment, legal framework, therapeutic culture, crisis communication and de-escalation skills, pharmacology, therapeutic interventions, critical incident stress management”) and practical elements (“self-defense and constraint”). All newly appointed clinicians had to attend the workshop. After that, they were obliged to attend at least three refresher sessions (1.5 h each) per year. The main outcomes of the study were clinician perceptions of personal safety and confidence to manage inpatient aggression. These parameters were measured using a self-report questionnaire based on Thackrey’s “Confidence in Coping with Patient Aggression Instrument”. Clinicians reported the hospital as safe and found themselves relatively confident concerning their ability to manage aggressive patient behavior. Besides this, staff training on aggression management was reported as the most supportive factor for confidence in managing aggression. Whether the clinicians’ confidence in managing inpatient aggression as well as the perceived personal safety translate into an actual reduction in incidents can, however, not be concluded reliably from this study. Neither is it possible to determine whether this positive assessment was objectively related to the training programme. Other limitations of the study were the small sample size, the absence of a control group as well as the use of a questionnaire that was not validated. Davies and colleagues investigated the effectiveness of multiprofessional staff training (79 trainees) in “positive behavioral support” (PBS) in increasing staff confidence and changing attributions of challenging inpatient behavior in a medium secure forensic mental health service in Wales, UK. The training package around PBS included identifying primary and secondary (violence) prevention strategies. Methodologically the study was designed as a before-and-after comparison without a control group. PBS includes de-escalation techniques such as verbal de-escalation and prevention of challenging behaviors. It can be described as a non-aversive approach to preventing and managing challenging behavior (e.g. aggressive/violent behavior of patients) through increasing the confidence of staff in their own abilities in dealing with aggressive patients. Training for qualified staff took one day. It covered theoretical content as well as the practice of associated skills such as identifying primary and secondary prevention strategies for challenging behavior. Training for unqualified staff was limited to half a day and covered primarily theoretical aspects. To evaluate the effectiveness of the staff training program Davies et al. used self-report questionnaires.
To measure the staff’s confidence an adapted version of Thackrey’s “Confidence in Coping with Patient Aggression Instrument” was used. The staff’s attributions of challenging inpatient behavior were measured using the “Challenging Behavior Attribution Scale” and the “Causal Dimension Scale”. After the intervention the confidence in working with challenging inpatient behavior increased significantly for both qualified and unqualified staff. Particularly for qualified staff, attribution of challenging behavior to external causes increased as well. It could be hypothesized that the staff’s confidence and attribution changes might have de-escalating effects and thus a violence-reducing effect on the wards. However, an important limitation of the study is the fact that it did not focus on this concrete aspect. There is no evaluation of whether the effects extend to objective data, such as numbers of actual incidents. Furthermore, there was no control group. Isaak and colleagues examined the effectiveness of a 3-day intervention program (“Return home safely”) in a before-and-after comparison without a control group in a high-secure forensic psychiatric setting (total of 132 beds) in Israel. The training program was designed to enhance unit safety climate, to reduce patient violence and employee risk of injury from patient violence. The program contains several elements referring to the definition of de-escalation techniques as mentioned in the text above. Day one focuses on personal safety (i.e. how to avoid dangerous situations, self-defense skills, methods for safely restraining patients). Day 2 is mainly about tools for successful inter-staff communication. On day 3, staff issues around organizational learning are addressed (i.e. how to conduct incident investigations after adverse events). The outcome measures consisted of a questionnaire, recording of violent incidents and staff injuries. The 21-item safety climate questionnaire, distributed to hospital staff immediately before the workshop and again after 6 months, contained 3 safety climate measures, i.e. communication about safety issues, procedures and safety reporting, and perceived management commitment to safety. Following the training there was a significant improvement in perceived management commitment to safety as well as a marginally significant improvement in communication about safety issues as well as in procedures and safety reporting. The number of violent incidents and staff injuries also decreased significantly. Before the intervention program, about 31 aggressive incidents toward staff were reported annually on average during the period from 2004 to 2007. After the intervention program, about 15 aggressive incidents toward staff were reported annually on average during the period from 2008 to 2013. An important limitation of the study is its design, especially the absence of a control group. The same group evaluated the (long-term) effectiveness of annual refresher training sessions of the training program at reducing critical incidents (e.g. physical aggression towards staff). The authors found that the rate of incidents in the years 2009 to 2017 was kept low in comparison to the pre-intervention years. About 12 aggressive incidents toward staff were reported annually on average during the period from 2009 to 2017. Again, important limitations of the study primarily are the small sample size and the absence of a control group.
Discussion

This is the first systematic literature review examining the effectiveness of mental health staff training in de-escalation techniques in reducing violent incidents in forensic psychiatric settings. Unfortunately, inter alia due to the small number of relevant studies and their methodological weaknesses, only tentative conclusions can be drawn. The evidence base concerning the effectiveness of mental health staff training in de-escalation techniques in reducing violent incidents in forensic hospitals turned out to be poor. Despite employing an extensive search strategy, in the field of forensic psychiatry we only found 5 relevant studies meeting our inclusion criteria. The studies were methodologically rather weak, not employing a randomized controlled design. Reliance on before-and-after comparisons without a control group limits the confidence in the reported findings, e.g. of the differences between trained and untrained groups. In addition, the number of participants was quite small, with a range from 8 to 112 participants. Only 2 of the included studies [43, 45] used “key safety outcomes” such as rates or severity of violence, aggression, injuries or physical restraint. Two studies reported a significantly reduced number of aggressive incidents towards staff as well as a reduced number of employees injured after the staff training intervention. The remaining 3 studies found indirect indications for the effectiveness of staff training in de-escalation techniques in reducing violent incidents in forensic psychiatric settings, such as a lower level of perceived aggressive inpatient behavior, a significant increase of the staff's confidence in working with challenging inpatient behavior, or an increase in confidence in dealing with aggressive patients as well as in the perception of safety. These studies mainly relied on surveys focusing on self-reported measures regarding the ability to de-escalate situations or the subjective perception of aggressive behavior on the wards. Whether this would also translate into effects on actual behavior and a concrete reduction in the number of violent incidents in those settings remains unclear. We found a noteworthy variation across training programs in terms of topics covered as well as a considerably different number of training days (dosage). This makes it difficult to generalize findings across different studies employing different training programs. Participants might tend to answer surveys in the direction of social desirability, causing overestimates of the positive effects of training on domains assessed through trainees’ self-reports. Only one study evaluated long-term effects. In conclusion, the findings of this review remain limited. Therefore only tentative conclusions can be drawn as to what extent de-escalation training leads to increased confidence in staff dealing with aggressive incidents and possibly even to a reduction in aggressive incidents in forensic psychiatry. However, from a clinical point of view it seems quite obvious that staff in forensic mental health settings need to be trained in de-escalation, as this can be regarded as one of the core aspects of the profession. Future research on this topic seems to be absolutely necessary and should also focus on how this training can best be given in terms of time, form and content, i.e. which components the training should have and in which form and how often it should be delivered.
Objective key outcomes like assaults on staff and other patients, injuries of staff and patients, inpatient verbal aggression and violence towards objects, as well as the use of physical restraints, must not be neglected. Subjective measures like job satisfaction and the subjective sense of security on the part of staff and inpatients should be evaluated as well. Of course, it would be necessary to find out how long the effects last. Further research with observation as a method of data collection should especially focus on the effect of staff training on the level of objective and subjective competence of professionals, as this probably reflects its most direct effect. It is noteworthy that, in conducting this review, it turned out to be quite demanding to find robust definitions for relevant terms like de-escalation, de-escalation training and de-escalation techniques. The preexisting definitions are questionable. The current review adopted the definition of “de-escalation techniques” in accordance with Price and Baker. However, some doubts might arise as to whether the given definition of de-escalation techniques reflects the complexity of the relational and temporal context in which de-escalation is used. Compounding this problem is the fact that mental health staff trainings, even if they seem to focus on de-escalation or de-escalation techniques, usually consist of several other impact factors as well. Even “ProDeMa”, a program predominantly focused on training mental health staff in de-escalation techniques, apparently contains impact factors that go beyond the preexisting definitions of de-escalation (e.g. intercollegial first aid). In the field of general psychiatry, Hirsch et al. conducted a systematic review of the literature focusing on the efficacy of measures to avoid coercion in general. Unlike Price et al., whose paper inspired the current review, Hirsch et al. did not limit their review to interventions including de-escalation components. They found that complex intervention programs seem to be particularly effective. To conclude, for future research in the field of forensic psychiatry, the limited outcome of this review with regard to clinical implications indicates that a more comprehensive approach might prove worthwhile. Aggression obviously occurs as a result of many different factors. This seems to make it difficult to find a convincing effect of one single variable, for example the de-escalation skills of professionals. More precisely, conducting a further review dispensing with reference to de-escalation (techniques) and instead integrating a wide range of complex interventions (including “Safewards” and “Six Core Strategies”) aiming to reduce verbal and physical patient aggression as well as restrictive interventions might be an effective approach.
|
Exploring medical students’ perceptions of family medicine in Kyrgyzstan: a mixed method study
|
c328fc94-1189-4399-bc32-f5e7626f1ea5
|
10099892
|
Family Medicine[mh]
|
Primary health care (PHC) was put forward 42 years ago with the Alma Ata declaration as a set of values, principles and approaches aimed at raising the level of health in disadvantaged populations. In 2018, the Astana declaration renewed these key principles as a driving force for achieving the Sustainable Development Goals (SDGs). Evidence has shown that countries reorienting their health systems towards PHC are better placed to achieve the SDGs than those with a hospital-focused system. Developing stronger PHC with General Practitioners/Family Medicine doctors is linked to better outcomes, lower costs, and improved health equity. For the purpose of this paper, family medicine (FM) doctors will be used as equivalent to general practitioners, as this is the term commonly used in Kyrgyzstan. Despite clear progress, the development of FM continues to face a wide range of challenges. The World Health Organization (WHO) warns of a projected shortfall of 18 million health workers, primarily in low- and lower-middle-income countries (LMIC), by 2030. This shortage will have serious implications for the health of billions of people across all regions of the world if not addressed. The global human resource (HR) challenges described above echo the situation in Kyrgyzstan, a landlocked country in Central Asia. In Kyrgyzstan, FM doctors represent about 16% of doctors, corresponding to a medical density of 24.7/10,000 population, while WHO recommends 44.5/10,000. The current deficit especially impacts rural areas, where the few remaining FM doctors are either beyond or near retirement age. Nevertheless, PHC remains the first point of entry into the healthcare system for most people in Kyrgyzstan. Besides the lack of FM doctors already practicing in the health system, this specialty is not well recognized and valued by the Kyrgyz population and is unpopular among medical students, leading to very few young doctors deciding to follow this professional track. Since its independence from the former Soviet Union in 1991, Kyrgyzstan has embarked on a major healthcare reform, reducing the overall hospital capacity, moving towards more ambulatory care, retraining staff and developing a stronger PHC base with FM doctors. Since 2007, the Geneva University Hospitals (HUG) and the Unit of Development and Research in Medical Education (UDREM) at the University of Geneva have been providing technical support for medical training through the Medical Education Reform (MER) project financed by the Swiss Agency for Development and Cooperation (SDC). The main goals of this project are to improve the quality of the pre-graduate, post-graduate and continuing medical education programs by strengthening the instructional and organizational aspects of the curriculum; introducing more interactive teaching methods and active clinical experiences and practice for students; improving the students’ assessment system, leading to a national certification examination; and reinforcing the priority towards FM. While global efforts to develop FM have been gaining momentum over the past decade, with several studies undertaken in high-income countries, few analyses focus on the factors influencing medical students’ specialty choice in LMICs and their perception of FM. This study aims to explore how Kyrgyz medical students perceive FM and the factors that may influence their choice of specialty.
Study design and setting
This study used a cross-sectional explanatory sequential design, including a quantitative survey and focus group discussions, and was carried out at the Kyrgyz State Medical Academy (KSMA) in Bishkek in 2017 (Fig. – flow chart).
Context
The specific situation of Kyrgyzstan stems from the previous Soviet system, which favoured specialties and sub-specialties. The medical education system is still very much influenced by specialists, with very little recognition of FM. The pre-graduate medical curriculum is exclusively taught by specialists, who do not have a clear idea of what FM is. At national level, 15 residents opted for the FM specialty in 2013, versus 11 in 2014 and 10 in 2015 (MER-Project data). The promotion of FM and an increase in the number of FM doctors have become a key priority for the Ministry of Health (MOH), and results started to show an increase from 2016 (Fig. – Timeline). Medical studies in Kyrgyzstan are 6 years long. The KSMA curriculum reform was initiated in 2012, and the first cohort of students following the revised program focusing on FM graduated in June 2018. In parallel, the post-graduate training for the FM specialty was reformed and a new two-year residency training was introduced in 2018 (instead of the initial one-year residency training), with 150 positions for FM available out of 894 residency positions throughout the country. The post-graduate training for the other specialties lasts at least three years.
Participants
The target group consisted of medical undergraduate students at KSMA registered in 2017/2018 in years 1, 4 and 6, representing a total cohort of 1449 students.
Ethics approval
Prior to any data collection, the study was submitted in 2017 to the Ethics Committees in both Geneva and Bishkek (Commission Cantonale d’Ethique de la Recherche (CCER) in Geneva and the KSMA Ethical Board in Bishkek), who designated the study as exempt from formal review.
Instrument and data collection
The data collection tools (survey and interview guide for the focus group discussions) were translated into Russian and back-translated into English to ensure the translation’s quality (Additional file 1). A pilot test was carried out to estimate the duration of survey administration and to provide guidance for the data collection team in the planning. The data collection took place from June to December 2017. The Kyrgyz partners (ZI, DM, NB), with support from the Head of the Educational Department at KSMA, took care of organization and data collection. Clearance from the administration of KSMA and the rector was obtained prior to the data collection. The first author went on site to moderate the focus group discussions in November 2017.
Quantitative survey
For maximum participation, national partners introduced the survey’s objectives to the students during a lecture preceding the survey. Students were invited to sign an informed consent form prior to the survey’s administration, which took place during an assigned class in a centre equipped with 65 computers. The survey was self-administered at three key training stages (year 1, start of the pre-clinical teaching; year 4, between pre-clinical and clinical teaching; year 6, fully clinical teaching), and students took on average 20 min to complete it. The survey was adapted from an existing survey used in the Republic of Tajikistan in 2014 for a similar project. In order to enable comparison, the survey was only adapted when necessary to reflect the Kyrgyz context, such as the names of the curriculum and modules and the factors influencing specialty choice. The survey was divided into 6 sections: (1) socio-demographic data (10 items); (2) choice of specialty (7 items) and influencing factors (12 items); (3) perception of the FM specialty (11 items) and in comparison with other specialties (6 items); (4) type of comments about FM (5 items); (5) perception of the quality and impact of the training in medical school on the perception of FM; (6) perception of post-graduate training (3 items), given to year 6 students only.
Focus group discussion
The focus group participants were selected from the participants who responded to the survey in order to be as representative as possible of the whole student population with regard to their origin (urban or rural), gender and mode of financing their medical education (government-subsidized or private students). Six focus groups (2 per study year) were organized with 5–9 students per group and consisted of in-depth discussions about 3 themes based on the preliminary results of phase 1: (a) the factors that influenced their specialty choice (students of year 6 only), (b) their views on specializing in family medicine in comparison with other specialties, and (c) their views on the new curriculum and their recommendations. To stimulate the discussion about the image of family medicine compared to other specialties, participants were asked to rank a set of 13 specialties, including family medicine, according to their level of difficulty, attractiveness and prestige. This was done by sorting cards into piles. This approach allowed access to people’s perceptions and invited the participants to structure and justify their representations. Students organized the cards on a range of 4 to 10 levels, and the level of ranking of FM was standardized to a maximum of 10 to allow comparison per year and per perception (for example, if FM was classified on the 4th level out of 5, it was standardized as 8 out of 10).
Data management and analysis
Data from the questionnaires were imported into an Excel spreadsheet to enable descriptive statistics to be drawn from students’ answers. Focus group discussions were recorded, transcribed verbatim by the local investigator (ZI) and translated from Russian into English. The transcripts were analysed using the MAXQDA 2018 programme in a deductive approach, using a framework based on 7 factors identified in a qualitative systematic review by Olid et al. looking at students’ attitudes and perceptions towards FM (Table ). The final analysis consisted of aligning findings from the survey and from the FGD within each of the 7 Olid et al. themes. Findings were then synthesized and interpreted for each theme. In addition, new emerging themes were extracted and representative quotes were chosen for each theme (Table ). The label following the quotes indicates the number of the FG and the year of study.
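The rank standardization used for the pile-sorting exercise is a simple proportional rescaling to a common 10-point maximum. The following minimal sketch (in Python, purely illustrative and not part of the original analysis; the function and variable names are hypothetical) reproduces the worked example given above:

```python
def standardize_rank(level: int, total_levels: int, scale_max: int = 10) -> float:
    """Rescale a pile-sorting level to a common maximum so that rankings
    made with different numbers of piles can be compared.

    Worked example from the text: FM placed on the 4th level out of 5
    becomes 4 * 10 / 5 = 8 on a 10-point scale.
    """
    if not 1 <= level <= total_levels:
        raise ValueError("level must lie between 1 and total_levels")
    return level * scale_max / total_levels


# Illustrative use: FM ranked on the 4th of 5 piles for one perception.
print(standardize_rank(4, 5))  # -> 8.0
```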
a. Participants
Overall, 66% of registered students (953 out of 1449) completed the survey, with a similar percentage of students per study year (Table ). Out of the 953 surveyed students, 63% were female, which is representative of the higher proportion of female students at KSMA (60%). 63% came from cities and 24% from rural regions. Overall, 47% of students received subsidized tuition from the government; the remaining 53% paid for their studies themselves. The proportion of subsidized students corresponds to the number of grants given by the government, which can vary from year to year. Forty-two students were recruited randomly for the focus group discussions, including 31 females, 20 government-subsidized students, and 32 coming from cities.
b. Students’ interest in various specialties
Figure illustrates the percentage of Kyrgyz medical students interested in working in different specialties at years 1, 4 and 6 of their medical training. The interest in FM was the lowest of all specialties and decreased over the study years (24%, 10% and 8% in years 1, 4 and 6 respectively). The highest interest was for surgical specialties, which also decreased over the study years, although more than 50% of students were still interested at year 6 (80%, 63% and 55% respectively). Finally, the interest in other specialties (psychiatry, internal medicine, paediatrics, emergency medicine, and obstetrics-gynaecology) ranged from 20 to 45%, with no clear trend over the study years.
c. Factors influencing specialty choice
Figure presents the relative importance of 12 factors for the choice of specialty for 6th year students (N = 315). Access to high medical technologies, career opportunities, salary, patient interaction and the possibility to work abroad were the top 5 factors, rated as important or very important by more than 80% of the students. The least important was the continuation of a family legacy of doctors (27%).
d. Difficulty, attractiveness and prestige of FM compared to other specialties
Compared to the other 12 specialties, FM was considered very difficult by year 1 and year 6 students (ranking in the pile sorting: 8 and 9.5/10). In contrast, its prestige and attractiveness were considered moderate in year 1 (4.5 and 6.5/10 respectively) and were the lowest at year 6 (1.5/10 for both). In summary, year 6 students considered FM the most difficult but the least prestigious and least attractive specialty.
e. Students’ perception of FM
Table presents the synthesis aggregating the quantitative (percentages by answers from the survey) and qualitative (quotes) findings, organized around the 7 themes presented by Olid et al. The key findings for each theme are presented below.
Broad scope and context of practice: Students from the 3 study years considered FM a specialty with a wide field and a broad scope of practice, making it a difficult specialty.
Lower interest or intellectually less challenging: Most students from years 4 and 6 perceived FM as unattractive and offering limited career possibilities. They were very critical towards the profession of FM and repeatedly stated that it was office work and boring. In their view, FM doctors can only manage minor problems and have to refer their patients to specialists. Year 1 students, although acknowledging that FM is unpopular and highlighting the lack of development perspective, had a better image of it than students in years 4 and 6.
Influence of role models and society, other professionals and family: The key positive influencers are parents who practice FM and the rare professors who promote FM. Other professors, on the contrary, dismiss FM. Students therefore have only rare role models within society and family. Students reported that some teaching staff appeared not to know what FM is about. Worse, most comments about FM and the reform made by professors, family doctors, students and alumni were negative.
Lower prestige: Students from all study years recognized that FM is not prestigious. They perceived it as poorly valued by society but also by other medical doctors. They considered that specialists are more needed than FM doctors. However, they also think that FM should be more prestigious. The low prestige is attributed to the low income of this profession, the poor working conditions, the lack of recognition as a specialty, and the lack of professional development opportunities.
Lower remuneration: The issue of remuneration is indeed key in this profession, and it is perceived by all students as much too low, not even allowing FM doctors to live decently. It is thus a major obstacle to choosing FM.
Medical school influences: The positive opinion of 1st year students on the completion of a course in FM at post-graduate level, whatever specialty is chosen, is linked to their awareness and understanding of the current needs of the health care system. This stands in contrast to 4th and 6th year students, who are less aware of the reform. Generally, FM lectures are perceived as boring, and this is also linked to the teaching staff being insufficiently informed and trained about the role of FM. Thus the undergraduate training fails to stimulate students’ interest in FM. However, year 4 students reported some lectures which helped them to understand what FM is about.
Post-graduate training: From the survey results, a majority of 6th year students agreed that two years of PGME is enough for FM and that it prepares students sufficiently. However, during the focus groups, they expressed great reluctance toward the 2-year postgraduate training in FM, since the residency is unpaid and will be costly for their families. When discussing the post-graduate training, students thought they would all have to become FM doctors and disagreed with that. This misunderstanding may have caused their negative attitude towards the two-year residency training in FM. As for 1st year students, the focus groups revealed that they were better informed about the ongoing reforms in the KR health care system and in medical education. In particular, they had a more positive attitude towards the post-graduate training in FM and were fully aware of the 2-year post-graduate training that would follow the 6 years of studies.
In addition to these 7 themes, two additional themes emerged from the qualitative analysis of the FGD that were not in the Olid et al. framework. These themes could be specific to the Kyrgyz context.
1. Working conditions
This theme was defined as work location, salary and equipment in facilities allowing practice in decent conditions. Students clearly expressed their needs in terms of basic living and working conditions. In addition to the poor remuneration already discussed, they raised the issue of the chronic lack of equipment in rural areas, which does not allow professionals to answer the needs of the population.
They also raised the issue of the language barrier, since Kyrgyz is not the first language taught at school, whereas in rural areas the population mostly speaks Kyrgyz. These factors are highlighted in the two following quotes:
“You have to understand, no matter how we study, no matter how we want to be a family doctor, if we would not be provided with basic working and living conditions, there won’t be a FM. If there is no first-aid room or medical treatment room, it will discourage residents to work there. Also, the students are afraid to stay in a small village or town because they won’t be able to provide their family.” (FG1-Y4)
“We need three factors. They are salary, working conditions, and equipment. It is true, even the building, and these walls treat patients.” (FG3-Y6)
2. Social accountability/responsibility
This theme was defined as the need to become an FM doctor as a mission with regard to the country’s needs. Some students consider that they have to help people in the remote areas and that this is important for the country. They also have to convince families and friends of the importance of this mission. This sense of duty to help the country was strongest in year 1. This theme is described in the quotes from two students in year 1 and another from year 4.
“If we tell our friends that the profession of a family doctor is awesome, they will follow us. For example, I called my parents and told them that the profession of FM is good and our generation can change the situation with this specialty in our country.” (FG2-Y1)
“I think that is good. I am from Batken. I am government-sponsored student. Therefore, I am ready to go back to Batken. When I pass by hospitals, I see many people coming to Bishkek from the rural areas. Imagine coming from another town with their children to get medical help. Medical service is expensive and it takes long hours to get to the capital. However, if we go to rural areas, we will be able to help those people in their home town.” (FG2-Y1)
“It is my childhood dream to become a doctor because my father and mother are doctors. Since childhood, I have always been amazed by their way of life and they have always inspired me. I can say that each doctor is a hero. I saw them getting up at night and going to a clinic or to the patient; they can refuse from family events and celebrations to help someone. Now I want to have the same experience.” (FG1-Y4)
Main findings
Our results show that the interest of Kyrgyz students in FM was the lowest of all specialties, particularly during the final study year. Access to high medical technologies, career opportunities, salary, patient interaction and the possibility to work abroad were the most important factors influencing specialty choice. Using the framework by Olid et al., the 7 themes were applicable to the Kyrgyz context and two additional themes emerged. FM is considered a difficult specialty due to its wide scope of work, but unattractive because it is seen as treating only minor health problems and providing limited career possibilities. In addition, poor prestige and insufficient remuneration have a discouraging influence on the specialty choice, since decent living conditions are not guaranteed. The medical school itself has a negative influence on student perceptions, in particular through the detrimental comments that students hear about the FM profession. Moreover, the recent lengthening of the PGME training from 1 to 2 years further dissuaded some residents from becoming FM doctors. The two additional themes that emerged were the deficient working conditions in rural areas in Kyrgyzstan and the social accountability and responsibility of becoming an FM doctor as a mission to meet the population’s needs. These additional themes, specific to the Kyrgyz context, might also be relevant for other LMICs.
Comparison with literature
One of the main problems revealed by our study was a distorted image of FM, which is a world-wide issue, as well as its image as a career of low interest and prestige. Family medicine is viewed as a monotonous and non-technological medical practice with no intellectual challenge. Most students estimated that it had a lower status than hospital specialties and that the main aim of a FM doctor was to identify serious diseases/disorders in order to refer those patients for specialized care. Our study further confirmed the prevalent influence of the medical school. The importance of the academic environment on the choice of career for medical students has been clearly documented. Two studies have shown that the curriculum and academic discourse can play a significant role in students’ professional identification and specialty choice. Students from schools where the FM specialty is disregarded were less likely to practice primary care, and the academic discourse prevented students’ ability to identify with the practice of FM. Moreover, an institutional culture not valuing FM through positive role models, and transmitting a distorted image of this specialty, for example through specialists’ negative attitudes towards family doctors, were key features influencing student perceptions. Our results are aligned with other studies with regard to FM’s image, but provide additional insight that might be specific to the Kyrgyz context or LMICs in general. The poor perceptions of FM might be explained by the fact that the pre-graduate training did not include a FM curriculum until recently, and that the post-graduate training lasted only one year post-certification at the time of the study. Finally, students raised a discrepancy with their personal needs as defined by Querido (salary, career options, status, work-life balance, labor content), as also illustrated by Fonkon et al.’s findings.
Indeed, as medical career decisions are formed by a matching of perceptions of specialty characteristics with personal needs, it is easy to understand that working in remote rural areas, without the necessary equipment to diagnose and treat patients, and without a level of remuneration consistent with raising a family, were main obstacles to choosing this career. This is primarily a political issue outside the control of medical schools.
Strengths and limitations
A key strength of this study is that it adds to the body of literature on Central Asia. The framework used to analyse the findings did not include any literature from low- and middle-income countries, and our study thus complements what is already known. We also acknowledge some limitations. Because of timing and funding, this study was carried out with a cross-sectional rather than a longitudinal approach. A longitudinal approach would have allowed the cohort to be followed over a longer period of time and their changes in perception to be assessed more precisely. Differences observed between the different years of study could also be due to true differences in each cohort rather than changes that occur over the course of the studies. Furthermore, the cohort came from one single institution in Kyrgyzstan. However, KSMA is considered the biggest institution and, being located in the capital, includes students from a variety of backgrounds, as presented in the results. Whilst participation of students was voluntary, the data collection happened during compulsory teaching hours. Students might therefore not have felt free to leave or to decline to answer the questionnaire. In addition, about one third of the students did not complete the survey, and we cannot exclude that the perceptions of FM are even worse in reality. Finally, we used an existing questionnaire that presents some weaknesses. It included some items that were formulated as leading questions: their wording may have favoured certain responses. The 5-point Likert scale for several questions allowed respondents to give differentiated opinions; however, many chose not to express an opinion by selecting the neutral option.
This study highlighted the key factors responsible for the very low number of students choosing to become FM doctors in Kyrgyzstan. FM is considered a difficult specialty due to its wide scope of work, but unattractive because it is seen as treating only minor health problems and providing limited career possibilities. In addition, a major deterrent in this context is the poor working conditions encountered in remote areas, including lack of equipment and low remuneration, making it impossible to care properly for patients and to live decently. This factor, presumably specific to many LMICs, is out of reach of medical schools and has to be addressed politically through improvements in the health system. Another prevalent influence, common to many countries, is how medical schools, through their institutional culture, fail to value FM through positive role models and transmit a distorted image of this specialty. Successful interventions to increase the proportion of medical students choosing a FM career are characterized by diverse teaching formats, student selection, and good-quality teaching. The most effective strategies consist of (a) developing longitudinal, multifaceted FM programs during the medical curriculum and providing a high-quality experience in PHC by introducing FM practice clerkships at pre- and postgraduate levels; and (b) championing general practice within the undergraduate curriculum, especially by building it as an academic discipline with academic FM doctors in prominent and senior roles both in teaching and research, and as a specialty on the same level as others. Having FM doctors as teachers in the curriculum will improve professional identity formation for the students. These findings served as a basis for recommendations intended specifically to help Kyrgyzstan improve its health system. The package of measures seems to indicate an increase in residents choosing FM and going to the regions (2019) (Fig. ). However, this aspect will have to be evaluated and studied in the long term.
|
Dental hygiene students' matching accuracy when comparing antemortem dental radiographs and oral photographs to simulated postmortem
|
ac163f1a-529d-4467-a34d-6584b11f3c92
|
10099967
|
Forensic Medicine[mh]
|
INTRODUCTION Comparison of antemortem (AM) and postmortem (PM) forensic dental evidence is heavily relied upon for the establishment of human identification and is a multidisciplinary effort during mass fatality incidents (MFIs) [ , , ]. Naturally, teeth and surrounding oral structures have characteristic features considered unique enough to be distinguishable from others, but human identification is not limited to naturally occurring features. Distinguishing features are greatly increased by changes which occur during the lifespan and serve as clinically detectable dental identifiers (CDDI) classed as restorative treatment, pathology, and morphology; unique features that are useful alone or in combination [ , , , ]. CDDI can be imaged and documented in AM and PM records via radiographs, photographs, and/or symbols on odontograms for comparative matching. The visualization and presentation of dental evidence must be of good quality—characteristics driven by dental industry standards, radiographer/photographer techniques, and the end user's expectations for legal and scientific integrity. AM dental images and record documentation are primarily produced by dental hygienists in private practice and are of great importance to human identifications. Specially trained dental hygienists may serve as personnel for disaster victim identification (DVI) to collect, organize, and transcribe AM and PM decedent data, and as consultants for the comparison team to support forensic odontologists. Radiographs are considered the most objective and reliable source of information in an AM dental record for showcasing the diversity of missing, filled, and unrestored patterns among teeth and surrounding structures [ , , ]. Diagnostic digital dental radiographs are regularly exposed and interpreted by dental hygienists for patients in accordance with the American Dental Association guidelines for patient selection criteria. Therefore, many AM dental radiographs are products of dental hygienists' professional expertise regarding proper visualization and presentation, and many of the evidence-based diagnostic conclusions made by dentists are reached in consultation with dental hygienists serving as the initial image interpreter. Dental hygienists complete Commission on Dental Accreditation (CODA) educational programs and are eligible for licensure as competent and self-directed in radiation scientific principles, use of radiography equipment, quality assurance, and interpretation of findings [ , , ]. Radiography education and practice performed on mounted skulls and live patients are an integral part of laboratory and clinical hours in the dental hygiene curriculum. Additionally, dental hygiene education includes intensive coursework on head and neck anatomy and tooth morphology; coursework integrated with patient clinical examinations and radiographic interpretations that require recognition, description, and documentation of typical and atypical landmarks and structures for natural and man-made findings. Dental hygienists' educational preparation makes them ideal for assisting PM examinations, exposing dental radiographs on decedents, and, when specially trained, for visual discrimination and interpretation of imaged CDDI during the comparative phase [ , , , , ]. While dental radiographs are considered most critical for image comparisons, oral photographs can also provide visual information from varying views and are best when used in conjunction with other evidence [ , , , , , ].
Of particular importance, photographs can offer evidence of unique features of anterior teeth, especially when patients are undergoing aesthetic improvements and are photographed at various stages of cosmetic and orthodontic treatment [ , , , ]. Additionally, conditions such as early erosion, wear, and pathology may be pictorially represented even if not yet detectable on radiographs or difficult to adequately explain in written text. Dental anomalies such as rotated teeth, talon cusp, and cusp of Carabelli can also be useful when pictorially captured, especially in cases with little to no dental restorations. Interpol's guidance on AM collection states that dental records should contain photographs of the dentition and of the patient when smiling, as these can have a crucial role in cases when AM dental records are otherwise insufficient. Additionally, Bollinger et al. concluded that human identification can be aided by comparisons of at least three or more teeth in AM and PM photographs, especially when AM images have captured multiple CDDI of both the maxillary and mandibular arches. The "reconciliation" phase of DVI operations may include information from multifactorial assessments, including a method known as comparative dental analysis in which PM and AM data are objectively compared to determine the identities of the decedents; a process often facilitated by the Microsoft Windows-based WinID3® software program, which can store thousands of cases of transcribed AM and PM dental data with collected images and odontograms [ , , , , ]. DVI comparison teams perform WinID3® sorting and filtering functions to generate a list of possible matching cases for further scrutiny. The examiner attempts to rule out unexplained discrepancies between AM and PM data; not to determine whether the two records are 100% identical but to determine whether they are sufficiently similar. Comparative identification is informed by use of dental record documentation and software comparisons but is dependent on the ability of the analyzer to visually recognize and discriminate between AM and PM imaged patterns of CDDI within the dental record [ , , , , ]. During DVI, a report regarding findings, recommendations, and conclusions is prepared by the forensic odontology section chief, and it may be informed by data collection and organization, as well as quality assurance consultation with other DVI team members such as dental hygienists [ , , ]. WinID3® codes are classed as "primary" or "secondary" to indicate the surfaces of a tooth with a restoration. For example, a tooth with an amalgam filling on the distal and occlusal, and a resin filling on the mesial and occlusal, will have a primary code of "MOD" and be shaded black on those surfaces regardless of the restorative material used for the separate restorations; "E" for resin and "S" for silver amalgam will serve as secondary codes noted adjacent to the tooth number. WinID3® primary and secondary coding is considered "midlevel granularity", requiring a moderate amount of time and skill. More simplistic coding systems typically use one code to represent the existence of a restoration (low granularity), while complex, detailed coding systems utilize multiple codes to characterize restorations and the involved tooth surfaces (high granularity); as detail and granularity increase, the time, skill, and potential for inaccuracies are theorized to also increase.
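To make the surface-coding example above concrete, here is a short, hypothetical Python sketch of the midlevel-granularity idea: a primary code built from the union of restored surfaces plus secondary codes for the materials involved. The function name, surface ordering, and data structures are assumptions made for this illustration and do not reproduce the actual WinID3® software or its internal representation.

```python
# Toy illustration of midlevel-granularity tooth coding (not the real WinID3).
# Surface letters follow the convention used in the text (M, O, D, plus F and L
# as assumed extras); material letters "S" (silver amalgam) and "E" (resin)
# come from the example above.

SURFACE_ORDER = "MODFL"  # mesial, occlusal, distal, facial, lingual


def code_tooth(restorations):
    """restorations: list of (surfaces, material_letter) tuples, e.g.
    [("DO", "S"), ("MO", "E")] for an amalgam on distal-occlusal and a
    resin on mesial-occlusal.

    Returns (primary_code, secondary_codes): the primary code records the
    restored surfaces regardless of material; the secondary codes record
    which materials are present on the tooth."""
    restored = set()
    materials = []
    for surfaces, material in restorations:
        restored.update(surfaces)
        if material not in materials:
            materials.append(material)
    primary = "".join(s for s in SURFACE_ORDER if s in restored)
    return primary, materials


# Example from the text: an amalgam on DO and a resin on MO give the primary
# code "MOD" with secondary codes "S" and "E".
print(code_tooth([("DO", "S"), ("MO", "E")]))  # ('MOD', ['S', 'E'])
```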
For this reason, the simplicity of WinID3 ® is highly accepted for MFIs especially when varying expertise and experience exist among supplemental personnel [ , , ]. In dental hygiene formal education and in clinical practice, detailed dental coding systems with complex “high granularity” are typically utilized for AM record keeping purposes [ , , ]. Though not formally educated on WinID3 ® , dental hygienists are educated on varying dental charting codes and can receive WinID3 ® training through standard operation procedures (SOP) which includes terminology, abbreviations, and data entry procedures for the adopted software to familiarize novice personnel . Additionally, WinID3 ® comes with sample cases complete with images for training novice operators . It is likely dental hygienists' education and general experience with dental software would be transferrable to WinID3 ® utilization, especially when provided software training. Dental hygienists are recognized for their dental expertise and considered “skilled personnel” for MFI DVI with background knowledge that can serve as transferrable skill when supplemented with training [ , , , ]. When MFIs occur, specially trained licensed dental hygienists may be tasked with AM and PM data management, as well as comparative phase consultation supervised by a forensic odontologist [ , , , , , , ]. Dental hygiene formal education standards and guidelines encompass the skills of recognizing and discriminating between unique features of dental materials, anatomical features, and pathology during clinical examinations, dental record keeping, and radiographic interpretation . However, despite recommendations [ , , , , , ], forensic odontology is rarely included in dental hygiene formal education, and the scientific literature does not adequately address dental hygienists' ability to perform DVI tasks due to transferrable skills. Therefore, the aim of this study was to assess the ability of senior dental hygiene students to accurately match simulated cases based on AM radiographs and oral photographs to PM WinID3 ® odontograms as possible transferrable DVI skills gained during formal education. This study also compared participant accuracy when matching AM radiographs to PM odontograms versus AM oral photographs to PM odontograms.
MATERIALS AND METHODS This study was given exempt approval by Old Dominion University Institutional Review Board (#1716413‐3). A qualitative balance design was used to evaluate match accuracy of simulated DVI cases among a convenience sample of 33 senior dental hygiene students from one baccalaureate degree granting institution. Students were recruited via email to view a PowerPoint presentation regarding general information on terminology, interpretation of primary and secondary WinID3 ® codes, and were provided an example of a correctly matched case which was not used for data collection. Examples of the PowerPoint learning content presented to participants can be found in Figure . Participants signed informed consent then completed a Qualtrics survey with drag and drop features where they were provided a total of 10 cases: 5 mismatched sets of digital AM full mouth series (FMS) radiographs and corresponding WinID3 ® PM odontograms to indicate match sets, as well as 5 mismatched sets of digital AM intraoral photographs and corresponding WinID3 ® odontograms to indicate match sets. The AM radiographic and photographic images were collected retrospectively from an educational dental hygiene care clinic where patients sign a release form granting permission for clinical data to be used for research purposes. The AM FMS radiographs were exposed using Schick Elite digital sensors and the AM 5‐view sets of intraoral photographs were imaged by a Canon EOS Rebel T6 digital single‐lens reflex (SLR) camera with ring flash. Images used in this study were not considered to be of perfect diagnostic quality but were assessed by the researchers and deemed appropriate for research purposes. Selected radiographs and photographs included a variety of CDDI such as missing teeth and restorative treatments (amalgam fillings, resin fillings, crowns, implants, bridges, and root canal therapy); distinguishing features existed from AM images to PM WinID3 ® odontograms. Five patient records (labeled A‐E), each including a FMS and set of five intraoral photographs were selected to serve as AM images for the simulated cases. Researchers created simulated WinID3 ® PM odontograms for comparison against the AM images. Examples of images and WinID3 ® odontograms used for the study can be seen in Figure . Cases varied in CDDI complexity and were categorized based on the number of restored and missing teeth as having 1–10 identifiers or 11–40 identifiers (Table ). The presented cases did not include an odd number of “no match” samples. Participant match accuracy per image type was assessed for performance differences between radiographic and photographic matching abilities. A researcher‐designed Qualtrics posttest asked Likert scale questions of perceived levels of difficulty, confidence in making matches, and demographics. SPSS software was utilized for statistical analyses, and statistical significance was set at α = 0.05.
RESULTS Thirty‐one participants completed the research for a completion rate of 93.9%. All participants were female senior dental hygiene students, almost half were Caucasian ( n = 15, 48.3%), and the majority were aged 18–29 years ( n = 25, 80.6%). Table summarizes research participant demographics. Participant match accuracy for cases A, C, and D with numerous dental identifiers (11–40 CDDI) ranged from M = 93.5 to M = 77.4. Match accuracy declined for cases B and E with fewer dental identifiers (1–10 CDDI) ( M = 58.1 to 41.9). Figure shows participant match accuracy trends according to the number of visible CDDI for cases based on the image type. McNemar's chi square revealed no statistical differences in participants' match abilities depending on image type (radiographs vs photographs): p = 0.687 (case A), p = 0.388 (case B), p = 0.625 (case C), p = 1.000 (case D), and p = 0.774 (case E). Most participants (74.2%) indicated photo matching as more challenging compared to radiograph matching despite quantitative findings showing that matching performance was not statistically different depending on image type. Figure shows that most participants (70.9%) indicated experiencing none to slight difficulty when attempting to match radiographs to odontograms, and 87% indicated slight to moderate difficulty when matching photographs to odontograms. When asked about perceived confidence, 93.5% indicated they were moderately confident in correctly matching the cases.
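To illustrate the paired comparison reported above, the sketch below (Python, with statsmodels assumed available; the counts are invented and are not the study's data) shows how McNemar's test contrasts radiograph-based and photograph-based match outcomes for the same participants on one case.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: radiograph match correct / incorrect; columns: photograph match correct / incorrect.
# Hypothetical counts for 31 participants on a single case.
paired_outcomes = np.array([[22, 4],
                            [3, 2]])

result = mcnemar(paired_outcomes, exact=True)   # exact binomial test on the discordant pairs
print(f"McNemar p-value: {result.pvalue:.3f}")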
DISCUSSION In this novel study, senior dental hygiene students were provided a special training presentation to prepare them for use of WinID3 ® comparisons with AM radiographs and photographs to assess their ability to make identification matches; a task which dental hygienists assist during MFIs. These participants had no prior experience or education in forensics or exposure to WinID3 ® . However, they were in their final semester of the dental hygiene program, deemed competent with dental radiology including interpretation skills, and were well‐versed with a dental record software and charting system of high granularity. Dental coding accuracy and interpretation is central to DVI so cases can be ranked for possible matches. However, the DVI comparison team must visually scrutinize the record against available images to provide consultation with forensic odontologists, so they may prepare a report of findings and recommendations regarding decedent cases. To date, scientific literature lacks research which assesses such DVI skills of licensed dental hygienists and the dental hygiene educational programs they attended. This study is unique in that it assessed senior dental hygiene students who were not yet licensed so their knowledge and skills resulting from formal education could be ascertained without the influence of work experience that may influence interpretation skills over time. Results of this study suggest dental hygiene graduates are well‐positioned to further their education by learning more about forensic odontology to prepare themselves as supportive personnel for DVI teams if needed for such service in the future. Results revealed the students' education did prepare them with transferrable knowledge and skills and allowed successful completion of the simulated DVI task. Participants collectively performed well with accuracy rates ranging from 93.5% to 83.9% for five matching tasks involving 11–40 CDDI, except one which yielded an accuracy rate of 77.4% based on AM photographs. These accuracy rates are similar to findings reported by Pinchi et al in a study inclusive of 20 senior dental students who matched AM and PM radiographs with accuracy rates ranging from 97% to 89% . Pinchi also reported forensic odontologist participants outperformed the students and demonstrated less inter‐operator variability, pointing to reliable expert opinion among the forensic odontologists . Importantly, this research reinforces the need for forensic odontologists to be the ones who make reconciliation recommendations based on dental findings; however, during MFIs it is also important to know the abilities and limitations of supplemental personnel who may assist with DVI comparisons. Results of the current study support this need and suggest dental hygienists can apply competencies from their educational preparation to contribute expertise for visually comparing AM images against PM WinID3 ® search results to assist in narrowing down the list of possible matches. A similar suggestion was made by Wenzel et al who assessed matching accuracy of AM and PM bitewing radiographs among 10 dental students and 3 forensic expert controls . Wenzel reported half the students made 1 to 2 false‐positive matches, except for one outlier which made 13 false‐positive matches, and the 3 experts made no false positives . However, among those same participants (3 DVI experienced experts and 10 novice students), all experts made false‐negative matches and all but one student made a false‐negative match . 
Wenzel concluded half the students performed with accuracy comparable to the experts and novice dental experts should be considered for DVI assistance during MFIs due to their demonstrated abilities with pattern recognition while interpreting and comparing AM and PM data . Bradshaw et al assessed dental hygiene students' dental charting of three human skull dentitions and comparisons with bitewing radiographs to determine match accuracy . Participants' dental charting accuracy scores ranged from 91% to 99%, and their accuracy for matching skulls and radiographs was 100% . It is important to note participant accuracy rates in the current study declined for more difficult cases with 1–10 CDDI and ranged from 41.9% to 58.1%. However, declined rates exist for other research studies of match accuracy with little to no CDDI among participants inclusive of dental students, dentists, and various types of forensic experts (odontologists, anthropologists, and radiologists with DVI experience). Gorza et al compared 19 forensic experienced experts and reported that match accuracy rates increased for cases which displayed higher numbers of visual similarities between AM and PM images . Additionally, participants in Gorza's study self‐reported their ability to decide on matches was negatively affected when there was insufficient visually distinguishing data in the images to facilitate comparisons . Chaim et al conducted a balance design match accuracy study of dentists and forensic odontologists and reported case difficulty was ensured by reduced rates of accuracy and confidence among participants . Pinchi et al reported case difficulty due to lacking visual identifiers resulted in reduced match accuracy for all participants, still forensic odontologists outperformed others and demonstrated the least amount of variability . Pinchi concluded that only professionals with dental training should participate in radiographic comparisons since they outperformed non‐dental trained participants . Considering the need for DVI personnel to be specially trained with demonstrated competence, the current study helps meet this need and addresses a gap in the literature by assessing how well dental hygienists are prepared by their educational programs and identifies additional training needed for them to best serve DVI when MFIs occur. Due to variations of methodology and research participants among the studies mentioned above, it is difficult to synthesize the results and draw consistent conclusions regarding DVI supplemental personnel. However, the concerning need for research regarding DVI personnel qualifications and abilities is often cited in the literature and considering the service of dental hygienists in this role, the current research helps address this need. Additionally, this research supports the First International Forensic Radiology Summit recommendations which recognized the need for research inclusive of multidisciplinary teams and with individuals of minimal direct forensic experience . Furthermore, the combined use of radiographs and photographs to support a forensic investigation is an example of multimodal imaging and was identified as a research priority for forensics . The current study supports these research recommendations by assessing and comparing dental hygiene students' ability to utilize radiographs and photographs to decide on matches. In a study by Agelakopoulos et al., photographs were more effective in narrowing down matches when compared to radiographs . 
Still, radiographs are critical and commonly part of retrieved AM dental records; when complemented by photographic images, the ability to narrow down matches may be increased considerably . In fact, most DVI software like WinID3 ® only allows for coding entries of restorations and not morphologic or pathologic features . Unique and naturally occurring morphologic dental identifiers are useful and increasingly more important due to the frequency of people maintaining their dentition with less need for restorative work as a result of preventative dental hygiene care. Therefore, it has been suggested more forensic research should investigate the utility of photographic images due to their ability to best capture dental crown morphology and soft tissue contours, compared with radiographs, which best capture root morphology, maxillofacial bone structures, and restoration contour lines . Future match accuracy studies of licensed dental hygienists should assess their performance with actual DVI cases varying in CDDI presentation. Interestingly, participants self‐reported more perceived challenge (74.2%) and difficulty (87%) when matching cases based on the photographs despite no statistical difference in performance outcomes according to image type. There was also a high frequency (93.5%) of self‐reported perceived moderate confidence for overall ability in correctly deciding on matches. This finding is similar to Page et al, who reported confidence levels ranging from 90% to 93% in a match accuracy study of dentists and forensic odontologists . It is possible the findings of the current study occurred because the participants received more practice with radiographic interpretations in the education program compared to interpretation of photographs. Their perceptions and confidence may have been influenced by perceived comfort with radiographic interpretations they routinely perform in the educational program, as this has been cited as a type of cognitive bias in forensic odontology studies . However, research shows the link between confidence and performance is weak , and it is uncertain that findings of the current study can be interpreted as being affected by overconfidence. Cognitive bias may not be completely avoidable but should be recognized, and mitigation should be in place to minimize the effect during forensic practice and training. Therefore, future studies of practicing and student dental hygienists are needed to assess the types of cognitive bias they may be susceptible to when serving as DVI personnel. There were several limitations of the current study. It has been suggested that DVI match accuracy research results may be impacted by participant cognitive bias. It is possible participants of the current study were affected by the Hawthorne effect and/or observer effect, which have been cited in other forensic studies . The researchers attempted to control the Hawthorne effect by making it known to participants that their responses were not graded and would not affect their position in the dental hygiene program. However, in order to orient participants to the experimental task to be performed, they were told interpreting the images and WinID3 ® charts may be similar to interpretation techniques they rely on when doing similar tasks for their educational program. This suggestion may have contributed to observer effect cognitive bias by unintentionally suggesting an expected behavior.
However, this project did require participants to use prior knowledge to interpret visual symbols on WinID3 ® odontograms, which is not part of their formal education. Research suggests undergraduate educational programs should address cognitive bias related to forensic purposes, and mitigation strategies should be implemented to control the effects of potential biases [ , , ]. Therefore, this research supports the 5th and 8th recommendations from the National Research Council that research is needed on biases of practitioners involved with forensic examinations so mitigation strategies can be devised and implemented as part of standard operating procedures . Additionally, participants consisted of a small convenience sample from one educational institution, and therefore the results cannot be generalized to other dental hygiene programs, students, or licensed dental hygienists. Students of any discipline should not be part of DVIs; however, the current study helps fill a gap in the literature to define transferable skills as a result of formal education. Finally, images provided to the research participants were limited and not totally representative of what they may encounter during actual DVI. For example, only screen shots of the WinID3 ® odontograms were provided, which created a limited view and restricted access to functionality features built into the software. Additionally, the photographs and radiographs were presented in Qualtrics, an electronic survey data collection tool also utilized by Chaim et al. Qualtrics allowed images to be magnified; however, the magnification was limited and did not allow full screen magnification, which can be achieved in most dental image software including the software these participants were accustomed to using in the dental hygiene educational program. Furthermore, this simulated study utilized FMS radiographs and comprehensive intraoral photographs, which are not always available in AM dental records, so it is not necessarily representative of all DVIs. Dental hygiene students demonstrated general success in matching AM images with simulated PM WinID3 ® odontograms, suggesting their educational program prepared them with transferrable DVI skills. However, forensic odontology educational opportunities are not readily available in dental hygiene formal education, and little is known about the transferability of their skills to assist forensic odontologists during actual DVI events. While much research has been performed to validate the uniqueness of dental radiographs, oral photographs, and charting for human identification purposes, this study is innovative due to its focus on assessing the ability of dental hygiene students to accurately match AM and PM data. More research is needed in education and practice when preparing dental hygienists for forensic‐based service.
The authors have nothing to disclose and have no conflicts of interest.
|
The value of thymus and activation related chemokine immunohistochemistry in classic Hodgkin lymphoma diagnostics
|
cda8a3ad-4731-4f51-b7a9-f50797444da6
|
10100154
|
Anatomy[mh]
|
Classic Hodgkin lymphoma (cHL) is characterised by the presence of Hodgkin and Reed–Sternberg (HRS) cells in a background of reactive immune cells. Although originating from germinal‐centre B cells, HRS cells have lost their B cell phenotype by down‐regulation of B cell transcription factors. This is reflected by weak PAX‐5 expression and absence of (or weak) CD20 and CD79a expression by immunohistochemistry (IHC). Instead, HRS cells virtually always express CD30 and IRF4/MUM1 and are usually positive for CD15. The cHL tumour micro‐environment consists of variable numbers of T cells, B cells, plasma cells, macrophages, eosinophils and neutrophils. T cells are most consistently present and often form so‐called rosettes. , The diagnosis of cHL is straightforward in most cases, but diagnostic difficulties can occur because cHL features may overlap with those of reactive lymphadenopathies, nodular lymphocyte‐predominant Hodgkin lymphoma (NLPHL) and mature lymphoid neoplasms. These cHL mimics require a different treatment and have a better or sometimes much worse prognosis; for example, in case of angioimmunoblastic T cell lymphoma. Although the majority of cHL cases can be diagnosed reliably by combining morphology, immunohistochemistry and clinical characteristics, there is still room for improvement. CC chemokine 17 (CCL17), also known as thymus and activation‐regulated chemokine (TARC), is a chemokine which is extremely highly expressed and secreted by HRS cells. This expression is much higher than the physiological expression by normal dendritic and epithelial cells of the thymus. TARC binds to the CC chemokine receptor 4 (CCR4) on CD4 + T cells and strongly contributes to the characteristic T cell‐rich tumour micro‐environment of cHL. Because of the high secretion of TARC by HRS cells, approximately 85% of cHL patients have elevated serum or plasma levels at diagnosis compared to healthy controls (median ~400× higher). Elevated TARC levels correspond with higher‐stage disease and tumour volume. , Previous research from our group and others has also shown that TARC levels in peripheral blood accurately correlate with response to clinical treatment. , , In earlier retrospective work, approximately 86% of cHL patients showed positive staining of HRS cells for TARC by IHC indicating its potential diagnostic value. Despite these promising results, no study has yet confirmed the value of TARC IHC in daily diagnostic practice. The aim of the current study was to evaluate the diagnostic value of TARC IHC in differentiating between cHL, NLPHL, reactive lymphadenopathies and mature lymphoid neoplasms with HRS‐like cells.
Study Cohort and Data Collection This study was performed at the department of Pathology and Medical Biology of the University Medical Center Groningen, a tertiary referral centre covering lymphoma diagnostics in approximately 5 million inhabitants of the Netherlands. TARC IHC was introduced in March 2014 and used by consulting haematopathologists S.R. and A.D. for in‐house diagnostics, revisions and consultations. In this prospective diagnostic setting, the governing World Health Organisation (WHO) classification of lymphomas was followed and TARC IHC was used as an addition to the regular diagnostic work‐up, including molecular diagnostic T cell clonality analysis to distinguish between cHL and T cell lymphoma when needed. Only the first diagnostic specimen was included in the case of multiple diagnostic procedures in the same patient. The study was conducted in accordance with the Declaration of Helsinki and the medical ethical review board of the University Medical Center Groningen approved the protocol under #RR202100080. TARC Immunohistochemistry Paraffin tissue sections (3 μm) were incubated with polyclonal goat anti‐human TARC antibody (1:800; R&D Systems, Minneapolis, MN, USA) on the automated Benchmark ULTRA platform (Ultra CC1, 52 min; Roche, Ventana Medical Systems, Oro Valley, AZ, USA). For each TARC stain, a section of cHL tissue was applied on the same slide as an external positive control. Positive TARC staining was defined as cytoplasmic positivity in the HRS cells. We subdivided TARC‐positive cases by the quality of staining (strong versus weak) and by completeness (completely positive versus a fraction of the tumour cells positive). Complete TARC staining was defined as positivity in > 90% of tumour cells. Data Analysis Data were recorded and analysed by using SPSS version 23 (released 2015: IBM SPSS Statistics for Windows, version 23.0; IBM Corporation, Armonk, NY, USA). Pearson's χ 2 test was used to test the relationship between patient characteristics and TARC positivity. The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions. Ethics Approval Statement This study was conducted in accordance with the declaration of Helsinki. The medical ethical review board of the University Medical Center Groningen approved the protocol under #RR202100080.
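For illustration only, the Pearson chi-square comparisons described in the data analysis could be computed as in the sketch below (Python/SciPy rather than the SPSS package used in the study); the counts are taken from the EBV-by-TARC figures reported in the Results and serve only to demonstrate the calculation.

from scipy.stats import chi2_contingency

#                 strong/complete TARC   negative, weak or incomplete TARC
table = [[31, 23],     # EBV-positive cHL (n = 54)
         [107, 20]]    # EBV-negative cHL (n = 127)

# correction=False gives the plain Pearson chi-square without Yates continuity correction.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")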
Study Population A total of 383 cases were included: 155 (40.5%) lymph node needle biopsies, 170 (44.4%) lymph node excisions, 48 (12.5%) extranodal tissue needle biopsies and 10 (2.6%) extranodal tissue excisions. A total of 190 cases of cHL were analysed, 20 cases of NLPHL, 64 reactive conditions and 109 cases of mature lymphoid neoplasms with (some) HRS‐like cells that were at least weakly CD30‐positive. TARC Staining in cHL Cytoplasmic positivity of TARC was seen in 91.6% of all patients diagnosed with cHL. The majority of cHL cases (77.4%) showed strong staining of all HRS cells. In some cases the area surrounding positive HRS cells stained weakly, with a gradient moving away from the HRS cells, probably indicating extracellular localisation of secreted TARC. Occasionally a few cells in the reactive background showed aspecific staining, but these were small and easily discernable from tumour cells. A representative example is shown in Figure . In 92 cHL cases (48.4%) the diagnosis was made on a lymph node needle biopsy. As a result of this high percentage of needle biopsies, many cHL cases were not subtyped. Nonetheless, lymphocyte‐rich (LRCHL) and mixed cellularity cHL appeared to show less TARC staining (Table ). Interestingly, TARC staining was also related to the EBV status of the tumours. Of all TARC‐negative cases, 68.8% was EBV‐positive. Negative, weak or incomplete TARC staining was seen in 43.4% (23 of 54) of EBV‐positive cHL cases compared to 15.7% (20 of 127) in the EBV‐negative cHL subgroup ( P = 0.001). Examples of weak and incomplete TARC staining in an EBV‐positive case can be seen in Figure . In addition, patient characteristics were significantly related to TARC positivity. TARC IHC was more frequently positive in female patients compared to males ( P = 0.017). Also, younger patients (aged < 45 years) were more likely to have positive TARC IHC ( P = 0.008). To gain more insight into the added value of TARC staining in the diagnosis of cHL, we compared the results to staining of CD30, CD15, PAX‐5, CD20 and CD79a in tumour cells (Table ). As expected, the majority (87.3%) of cHL cases showed strong and complete CD30 staining, while the other cases showed relatively weak expression. CD15 was strongly positive in 47.8%, weak and/or incomplete in 36.4% and completely negative in 15.7% of cases. When CD30 IHC was not strong and complete, TARC showed strong and complete staining in 12 of 24 (50%) of cases. In cases with no CD15 staining, 69% was positive for TARC. As expected, PAX‐5 was the most potent indicator of B cell lineage, as 83.1% of cases showed weak positivity (compared to surrounding B lymphocytes) and 10.1% showed relatively strong and complete staining. CD20 and CD79a showed weak and/or incomplete positivity in 17.8 and 34.2% of cases, respectively. As Table shows, the majority of cHL cases with B cell lineage marker expression showed strong and complete TARC staining. TARC Staining in NLPHL and LRCHL In NLPHL ( n = 20), 90% of cases showed no TARC staining of tumour cells at all, while in 10% there was only weak staining of sporadic tumour cells. There was no staining of cells in the inflammatory background. Figure shows a representative NLPHL case. In EBV‐negative LRCHL ( n = 8), TARC was strongly positive in all tumour cells in six cases and this helped in the distinction with NLPHL. 
One challenging EBV‐negative LRCHL case was TARC‐negative; the diagnosis of LRCHL was nevertheless preferred over NLPHL because the tumour cells were weakly CD30‐positive and partly CD15‐positive and follicular dendritic networks were lacking, although expression of B cell markers was present. The other EBV‐negative LRCHL case was weakly TARC‐positive and strongly positive for both CD15 and CD30. Lack of TARC Staining in Reactive Conditions Lymph node biopsies with immunoblastic, large, CD30‐positive cells may raise suspicion of cHL, especially in needle biopsies. In 64 cases, with 30 needle biopsies, none showed staining for TARC. Recognition as a reactive lymphadenopathy was based on the lack of classical HRS cells, low and/or heterogeneous CD30 staining, strong CD20 positivity and lack of an appropriate tumour micro‐environment. Figure shows a representative case of reactive lymphadenopathy. TARC Staining in Mature Lymphoid Neoplasms with HRS‐Like Cells TARC immunostaining was performed in 109 lymphoma cases with a final diagnosis other than cHL, NLPHL or reactive (Table ). This concerned cases in which HRS‐like cells with at least weak CD30 staining were present (Figure ). Strong and complete TARC staining of HRS‐like cells was seen in only six cases, consisting of EBV‐positive diffuse large B cell lymphoma (DLBCL; two of 14), chronic lymphocytic leukaemia/small lymphocytic lymphoma with EBV‐positive HRS‐like blasts (one of one), peripheral T cell lymphoma not otherwise specified with EBV‐positive HRS‐like blasts (one of 14) and disseminated mycosis fungoides (two of two). Because most of the strongly TARC‐positive mimics were EBV‐positive, we also analysed the other EBV‐positive mimics in more detail. In 10 T cell lymphomas that harboured a secondary EBV‐positive B cell lymphoproliferation, one was strongly positive and five were weakly positive for TARC. The EBV‐positive DLBCLs (nine of which were immune deficiency‐related) consisted of two strongly and six weakly TARC staining cases. All these cases could be clearly separated from EBV‐positive cHL by acknowledging that only a small proportion of the neoplastic cells consisted of HRS‐like cells. However, TARC staining patterns on their own showed considerable overlap with those in EBV‐positive cHL. Well‐known cHL mimics, such as primary mediastinal B cell lymphoma and B cell lymphoma unclassifiable with features intermediate between DLBCL and cHL, frequently showed weak and/or incomplete staining (17 of 19), but none of these cases exhibited strong and complete positivity.
In this study we show that TARC is a highly sensitive and specific tumour cell marker for cHL and that TARC IHC can be a helpful addition in daily routine diagnostics. In our prospective setting, TARC positivity of HRS cells was seen in 92% of all cHL cases, predominantly with a strong cytoplasmic staining pattern (77%). TARC usually stains each and every HRS cell and is virtually not expressed in the tumour micro‐environment, favouring its use for both screening purposes and assessment of co‐expression with other tumour cell markers. This very high specificity is a clear advantage over CD30 that often stains reactive cells in the vicinity of HRS cells or in lymph nodes without cHL. It should be noted that some monoclonal TARC antibodies appear to cross‐react and erroneously stain histiocytes and macrophages. We therefore employed and highly recommend a non‐cross‐reacting polyclonal TARC antibody. , , TARC IHC can help in differential diagnostic considerations in occasional challenging cases. NLPHL can be such a differential diagnosis, as it closely resembles the lymphocyte‐rich subtype of cHL. The distinction can usually be made by showing that the tumour cells in NLPHL have an intact B cell phenotype with strong expression of CD20, CD79a and PAX‐5, while lacking strong CD30 staining. However, HRS cells in cHL can also show variable staining of these B cell markers, and tumour cells in NLPHL are sometimes weakly CD30‐positive or even CD15‐positive. Patterns of follicular dendritic networks detected by CD21, CD23 or CD35 IHC can help in differentiating between the two entities, but these patterns are not always present in NLPHL and can be missed in needle biopsies. In our study, TARC IHC by itself could differentiate between cHL and NLPHL in most cases, as strong and complete TARC staining was not seen in 18 of 20 NLPHL cases. The other two cases only showed weak staining of sporadic tumour cells. In some instances, reactive lymphadenopathies may raise suspicion of cHL, especially in needle biopsies in adolescents and young adults. We evaluated 64 cases in which CD30‐positive cells were present that were considered potential HRS cells. These cells were usually interfollicular‐reactive immunoblasts resembling relatively small mononuclear Hodgkin tumour cells. Reactive immunoblasts can show substantial CD30 positivity, albeit at lower levels than HRS cells. In all the 64 cases TARC staining was negative, and none of the biopsied individuals developed cHL in follow‐up. HRS‐like tumour cells can be found in a variety of mature lymphoid neoplasms. Primary mediastinal B cell lymphoma is already known to express TARC, albeit at lower levels than in cHL. Accordingly, we observed weak TARC staining in eight of 10 cases, while the other two cases were negative. B cell lymphoma unclassifiable with features intermediate between DLBCL and cHL showed weak or incomplete TARC staining in all nine cases tested, fitting well with the grey zone nature of this diagnostic category. This suggests that TARC immunohistochemistry can be used to discriminate between B cell lymphomas arising from the mediastinum and other types of large B cell lymphoma. In our study, all seven anaplastic large T cell lymphoma (five ALK‐negative, two ALK‐positive) cases were TARC‐negative, contrasting with data from two older studies that found TARC positivity in 44% of ALK negative (12 of 27) and 4% (one of 27) of anaplastic large T cell lymphomas, respectively. 
We used the same antibody, but with another protocol, so this difference might be caused by technical aspects or by a chance effect on case selection. Other mature lymphoid neoplasms are biologically more distinct from cHL and can usually be readily recognised. We included cases in our series that were clearly not cHL, but which harboured at least a few HRS‐like cells. This consisted of a wide variety of entities, as described previously in many case reports and reviews. We observed strong TARC positivity in EBV‐positive DLBCL (two of 14), disseminated mycosis fungoides (two of two), peripheral T cell lymphoma not otherwise specified with EBV‐positive blasts (one of 14) and chronic lymphocytic leukaemia/small lymphocytic lymphoma with EBV‐positive Hodgkin‐like blasts (one of five). In these cases, the presence of a concurrent cHL was ruled out by considering clinical, morphological and immunophenotypical contexts. However, especially when atypical cells are EBV‐positive, TARC staining should be interpreted with some caution. We report these cases to illustrate that TARC staining, like all other IHC markers used in cHL diagnostics, should always be appreciated in context. In conclusion, TARC IHC positivity of atypical cells, especially in a strong and complete staining pattern, has great value in differentiating between cHL, NLPHL and other lymphomas or reactive lymphadenopathies with cHL‐like features. Implementation in daily diagnostics in more centres will help to delineate its discriminative potential in additional rare cHL‐mimics.
All authors declare that there are no competing (financial) interests in relation to the work described.
|
Evaluation of the (hu)
|
3ddf48dc-39f8-4271-8a71-c693f494e781
|
10100212
|
Forensic Medicine[mh]
|
INTRODUCTION Traditionally, the key components of a forensic profile are age, stature, sex and ancestry (though the value of inclusion of the last is currently being debated in the field; see ). Classically, the pelvis has been preferentially employed for sex estimation. This was widely popularized by Phenice in 1969, who showed that assessment of non‐metric traits of the pubis allowed sex to be estimated with greater than 95% accuracy . Since then, other techniques have been developed, including analysis of sciatic notch width and shape, along with the anatomy of the tibia, femur, and humerus [ , , , ]. Aside from postcranial elements, various studies have evaluated the efficacy of cranial features for sex estimation. Among cranial traits, mastoid size and supraorbital ridge/glabella size appear to be among the most reliable features [ , , , , , ]. Additionally, mandibular traits have been assessed for sexual dimorphism with high degrees of accuracy (>70% [ , , , ]). Sex can also be estimated using metric analyses, exploiting sexually dimorphic size differences in cranial and postcranial elements (e.g., femoral head diameter ). Ancestry estimation utilizes morphoscopic traits (traits that are categorized/scored based upon visual assessment) and/or metric measurements that display variation between geographically or socially distinct groups . Historically, non‐metric approaches first employed in the field had poor standardization and were highly dependent on operators' skill and experience rather than an established scientific method with a high degree of replicability . However, more robust methods have now been developed for identifying ancestry utilizing metric [ , , , , ] and non‐metric means [ , , ]. There is currently vigorous debate in the field of forensic anthropology both regarding the appropriate methods for ancestry or population affinity assessment [ , , , , , ] as well as the validity and utility of this exercise (e.g [ , , ]). Broadly, most sex and ancestry estimation methods using cranial elements rely upon the midface and/or neurocranium. However, situations arise in forensic and bioarcheological contexts where the best preserved (or only) remains may be mandibular. The mandible is an important structure to consider for forensic identification and biological affinity due to the fact that it is a durable bone that tends to remain intact and often well preserved . Many times, the mandible can survive conditions that other bones in the human body cannot due to its robust nature . There has been a movement toward developing easy‐to‐use, validated software‐based approaches for forensic skeletal analysis (e.g., FORDISC , CRANID ; SexEst ), but these programs have tended to focus on the cranium or post‐cranial elements. To improve ease of evaluating a forensic profile from mandibular material, Berg and Kenyhercz developed the (hu)MANid program, aimed at allowing sex and ancestry estimation using only mandibular features. The (hu)MANid program was developed by Berg and Kenyhercz using RStudio's framework Shiny version 0.13.2 . It is a free, web‐based program that utilizes metric and morphoscopic data to evaluate sex and ancestry. The program was built from a worldwide sample of mandibular data that can be used as a reference to classify mandibles for ancestry and sex.
This reference dataset was compiled from over 1750 individuals from 14 main populations in a worldwide sample and represents modern, historic, and prehistoric groups, including identified human remains as well as unidentified human remains that had ancestry assessed via osteological analysis. Output data include linear discriminant analysis (LDA), mixture discriminant analysis (MDA), distance from group centroid, and chi‐square probabilities (discussed in detail in ). In the initial 2017 paper, the (hu)MANid program was tested against FORDISC 3.0 , a well‐established forensic application to aid in forensic identification. Both yielded identical estimations of sex and ancestry. For the overall sample, the program correctly predicted sex 83.5% of the time for pooled sex groups, and ancestry 53% of the time for composite ancestry groups. Overall, (hu)MANid yielded the correct sex and ancestry classification for approximately 70% of the tested samples using mixture discriminant analysis and 60% using linear discriminant analysis. These results showed that the program was 3.6–4.2× more likely than chance to correctly classify samples . A recent study by Lynch and Cabo‐Perez examined the accuracy of the (hu)MANid method when applied to 3D surface scans of skeletal remains. They found higher accuracy for measurements taken from physical specimens than from 3D surface scans. In their geographically diverse surface scan sample, they found the average positive predictive value (PPV) for pooled sex to be 60% for males and 82% for females. In terms of ancestry, they divided their data based upon the relevant (hu)MANid reference group, with the composite reference group yielding 28% correct classification, the modern reference group yielding 27% correct, and the historic/prehistoric reference group 23% . To date, the (hu)MANid program has been investigated only in adult samples. While there are obvious barriers to sex estimation prior to puberty, there is some evidence to suggest that it may yet be possible depending on traits examined and age of the juveniles (e.g., ). Furthermore, many traits which differ in frequency across populations arise early in development , some even prenatally . Third molar development, the typical choice for estimating adult status from an isolated mandible, is highly variable and suffers from high error rates in age estimation, while 20%–40% of individuals show agenesis of one or more third molars [ , , ]. It is therefore relevant to consider whether the (hu)MANid program is robust to use in samples with completed second molar development but absent third molars. In this study, we seek to ascertain the utility of the (hu)MANid program when applied to computed tomography (CT) scan data from a diverse, contemporary population. Additionally, we seek to examine whether the use of the (hu)MANid program can be extended to adolescents. If so, this would open up another avenue for sex/ancestry estimation in a vulnerable population and would provide assurance to operators dealing with isolated mandibles of uncertain adult status.
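As a general-idea sketch of the discriminant-function approach described above (written with scikit-learn in Python, not the R/Shiny code underlying (hu)MANid), a linear discriminant model fitted to reference mandibular metrics returns a group classification together with posterior probabilities for an unknown mandible; the feature values and group labels below are random placeholders, not the program's reference data.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical reference sample: rows = individuals, columns = mandibular metrics
# (e.g. chin height, bicondylar width, ...); labels = pooled sex/ancestry groups.
X_ref = rng.normal(size=(300, 6))
y_ref = rng.choice(["White_M", "White_F", "Black_M", "Black_F"], size=300)

lda = LinearDiscriminantAnalysis().fit(X_ref, y_ref)

x_case = rng.normal(size=(1, 6))               # metrics for one unknown mandible
for group, prob in zip(lda.classes_, lda.predict_proba(x_case)[0]):
    print(f"{group}: posterior = {prob:.2f}")
print("classified as:", lda.predict(x_case)[0])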
MATERIALS AND METHODS Our sample comprises cone‐beam computed tomography (CBCT) scans from a clinical setting, allowing us to obtain a large, diverse sample of contemporary individuals. The sample was collected utilizing a cross‐sectional review of the University of Illinois Chicago dental charts in order to identify subjects with CBCT scans who met our inclusion/exclusion criteria (IRB: 2019‐0399). The analysis pulled patients across the College of Dentistry who had CBCT codes between 1/1/2008 and 1/1/2019, were aged 8–17 (pediatric sample) or 20–45 (adult sample) years, and had available age, sex, race, and ethnicity data. These scans were collected from electronic health records from the Departments of Orthodontics, Oral Surgery, and Periodontics at University of Illinois Chicago. The exclusion criteria were subjects who reported multiple races or ethnicities; duplicate scans of the same individual; factors that would interfere with mandibular analysis such as trauma to the mandible, craniofacial abnormalities that affect the mandible, or medical and/or dental conditions that influence jaw shape; and gross image distortion of the mandible in the CBCT. Subjects who met all other criteria but fell into ancestry groups that lacked sufficient numbers for comparison were also excluded. The initial sample was screened for CBCT quality and electronic health record sex and self‐reported social "race" and "ethnicity" (Hispanic/Non‐Hispanic). For the patients who qualified, sex, age, race, and ethnicity data were recorded. At check‐in, patients self‐reported "race" (Asian; Black; White) and whether they identified as Hispanic, which were recorded in their electronic health records. To align with the ancestry group outputs for (hu)MANid, subjects were coded as "Asian" if they were Asian non‐Hispanic, "Black/African‐American" if they were Black non‐Hispanic, "Hispanic" if they were White Hispanic, and "White" if they were White non‐Hispanic. After groupings were completed, the CBCT data were exported from the EHR and imported into 3D Slicer freeware (BWH, Boston, MA, USA) for metric and morphometric analysis. Prior work has suggested high accuracy and reliability of CT scan data relative to physical samples (e.g., [ , , , ]; but see ). Studies have demonstrated that applying non‐metric traits to CT scan samples can achieve levels of accuracy in sex prediction equivalent to those documented in dry skeletal material . Stull and colleagues demonstrated that linear measurements from CT scans of human remains generally had 2 mm or less of variation between the scans and measurements taken directly on the skeletal elements. Of particular relevance, Simmons‐Ehrhardt and colleagues examined the accuracy of sex and ancestry estimations using clinical CT scans and found high accuracy aside from a limited number of measurements (orbital breadth, nasal height, frontal, and parietal chords), concluding that CT scan‐derived virtual models are "comparable" to dry bone measurement data. Given this, a CT scan‐based approach was deemed justified. 2.1 Adult sample A total of 384 CBCT scans were initially identified from the University of Illinois Chicago electronic health system utilizing electronic health record codes. Of these, 143 samples fit the inclusion and exclusion criteria. These subjects' CBCT scans were measured according to the guidelines of the (hu)MANid program.
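The race/ethnicity grouping rule described above can be made explicit with a short, hypothetical sketch (Python/pandas; the column names are illustrative and are not those of the EHR export).

import pandas as pd

def assign_group(race: str, hispanic: bool):
    """Map self-reported race/ethnicity to the four analysis groups; None = excluded."""
    if race == "Asian" and not hispanic:
        return "Asian"
    if race == "Black" and not hispanic:
        return "Black"
    if race == "White" and hispanic:
        return "Hispanic"
    if race == "White" and not hispanic:
        return "White"
    return None

records = pd.DataFrame({
    "race":     ["White", "White", "Black", "Asian"],
    "hispanic": [True, False, False, False],
})
records["group"] = [assign_group(r, h) for r, h in zip(records["race"], records["hispanic"])]
print(records)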
The sample was broken down into four ancestry groupings, based upon self‐reported data: African American/Black, Asian, Hispanic (White), and (Non‐Hispanic) White. These were further subdivided into male and female subsets. Subjects identifying as non‐Hispanic with Black/African American race were assigned to Black ( n = 18). Individuals who identified as Asian were categorized as Asian ( n = 10). Subjects identifying as Hispanic and White were assigned to Hispanic ( n = 60). Subjects who reported non‐Hispanic ethnicity and White were assigned to White ( n = 55). The sex breakdown of our sample was n = 62 Female and n = 46 Male (Table ). 2.2 Adolescent sample Seven hundred nine total potential subjects were reviewed for selection. One hundred ten subjects fit all original inclusion and exclusion criteria, of which ( n = 40) fell into the target age range (15–17). This age range was selected due to the fact that most individuals will have completed second molar root development by approximately 15 years of age ; dental age estimation from this point onward relies upon third molar development, which is absent in some individuals [ , , ]. Measurements and analyses were completed for all 40 (21 female, 19 male) eligible subjects. We included the three largest ancestry groups: White (non‐Hispanic; n = 7); Hispanic (White; n = 14); Black/African American (non‐Hispanic; n = 19). Measurements were conducted on the mandibles as per (hu)MANid website definitions and diagrams section, whenever possible. Eleven morphometric measurements and six morphoscopic measurements were recorded for each subject from their CBCT scan. The morphometric measurements included were chin height, height of the mandibular body at the mental foramen, bicondylar width, minimum ramus height, maximum ramus height, mandibular length, mandibular angle, mandibular body breadth at the mental foramen, mandibular body breadth at the M2/M3 junction, and dental arcade width at the third molar (see Ref. for detailed descriptions). The morphoscopic descriptions recorded were chin shape, lower border of the mandible, ascending ramus profile, gonial angle flare, posterior ramus edge inversion, and presence of mandibular tori . Each morphoscopic description is given a score corresponding to the various shapes present in the (hu)MANid sample. Most of the measurements rely on standard osteometric landmarks (e.g., chin height which is measured from infradentale to gnathion) and can be easily collected on 3D models using standard software such as 3D Slicer, Landmark Editor, Osirix, etc. To measure MAN (mandibular angle), the mandibular model was oriented to a lateral view with the inferior border of the body of the mandible oriented horizontal to the bottom edge of the screen using a ruler. A protractor was held up to the screen, aligned with the inferior border of the mandible, and the arm was extended to condylion, allowing the investigator to record mandibular angle. Similarly, most categorical macroscopic variables utilized by the program can be scored from scans following standard protocols (e.g., chin shape, where the dry bone or scan are oriented such that they are seen from a superior view to assess whether the mental surface is blunt, pointed, square, or bilobated) while one macroscopic trait (LBM: lower border of the mandible) also required a modified approach. 
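Before turning to the modified LBM protocol, note that the mandibular angle obtained with the on-screen protractor could, as a purely illustrative alternative (not the procedure actually used in this study), also be approximated from exported landmark coordinates; the landmark choice and the coordinates below are assumptions made only for the sketch.

import numpy as np

def angle_at(vertex, point_a, point_b):
    """Angle in degrees at `vertex` between the rays toward point_a and point_b."""
    v1 = np.asarray(point_a, float) - np.asarray(vertex, float)
    v2 = np.asarray(point_b, float) - np.asarray(vertex, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Placeholder coordinates (mm) as they might be exported from 3D Slicer.
gonion    = [95.0, -60.0, 10.0]
condylion = [100.0, -55.0, 55.0]
gnathion  = [0.0, 20.0, 5.0]

print(f"approximate mandibular angle: {angle_at(gonion, condylion, gnathion):.1f} degrees")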
Documentation for (hu)MANid describes the protocol for assessing LBM as requiring that the dry bone be set on a flat surface and that pressure be briefly applied to the anterior dentition to see if the mandible will “rock” on the surface . If the mandible only rocks forward, this is classified as a “partial rocker,” if it rocks forward and back it is a “rocker.” With 3D data, we could not attempt this. Instead, we focused on the (hu)MANid descriptions of the lower border of the mandibular body and comparison with the example images provided. Mandibles were oriented in lateral view and were aligned horizontal to the bottom of the screen. If the lower border was mostly flat, the mandible was scored as “straight”; if there was a notable concavity, usually towards gonion, the mandible was scored as “undulating”; if the border was raised anteriorly at the chin and sloped posteriorly but then was mostly flat, it was scored “partial rocker”; if the border was convex, it was scored as a “rocker”. The morphoscopic and morphometric measurements recorded were entered into the (hu)MANid program: https://anthropologyapps.shinyapps.io/humanid/ to generate estimated ancestry and sex, as well as posterior probability thereof. Reference groups were chosen to match the characteristics of the sample. Linear discriminant analysis (LDA) was selected in the program, both because it is the default (and thus presumably most commonly used) option and because of reported bias in sex estimation in the MDA model . The (hu)MANid program's previously reported LDA correct classification for ancestry and sex is approximately 60% (range: 59.2%–62.1%; see Ref. ; table 5.9). For this reason, our study used 60% correct classification as a benchmark for comparison for combined sex and ancestry estimation. To validate intra‐examiner reliability, 10 CBCT scans were measured for morphometric and morphoscopic variables and were re‐measured 2 weeks later. Intra‐examiner reliability was analyzed via Cohen's Kappa for the categorical variables and intraclass correlation analysis for the metric variables. For the adult sample, Cohen's Kappa values for the categorical variables were all approximately 1.0; intraclass correlation coefficients for the metric variables ranged from 0.925 to 1.0. For the adolescent sample, two measures had Cohen's Kappa values less than 0.80 (Gonial Angle Flare and Posterior Edge Inversion); for our continuous data, only one measurement yielded an intraclass correlation coefficient that was less than 0.80 (Mandibular Body Breadth at the M2/M3 Junction [TML23]). We note, though, that our intraclass correlation coefficient for TML23 (0.703) is very similar to the pooled intraclass correlation coefficient “absolute agreement” value (0.710) found in Byrnes et al. using adult dry bone data. Adolescent and adult datasets were analyzed separately. Chi‐squared tests were conducted to examine the association between self‐reported and predicted ancestry. For the adult sample, these were conducted with and without the Asian subsample, which had minimal contribution ( n = 10) to the total study population. Kruskal–Wallis tests were conducted to examine whether any of the predicted weightings for race/ancestry (Black female, Black male, Hispanic female, Hispanic male, White female, White male) differed in a statistically significant manner across actual reported ancestry groups.
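For readers who wish to reproduce this style of analysis, the sketch below shows how the reliability and association tests described above can be run with standard Python libraries (scikit-learn and SciPy); intraclass correlation coefficients for the continuous measurements could similarly be obtained with, for example, pingouin's intraclass_corr function (not shown). The variable names and example values are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, kruskal
from sklearn.metrics import cohen_kappa_score

# Intra-examiner reliability for a categorical trait: Cohen's kappa on repeat scoring.
chin_shape_week0 = ["square", "pointed", "bilobate", "square", "blunt"]  # first scoring session
chin_shape_week2 = ["square", "pointed", "bilobate", "blunt", "blunt"]   # re-scored 2 weeks later
print("Cohen's kappa:", cohen_kappa_score(chin_shape_week0, chin_shape_week2))

# Association between self-reported and predicted ancestry: chi-squared test on a contingency table.
# Rows = self-reported group, columns = (hu)MANid prediction; counts are invented for illustration.
contingency = np.array([[10,  5,  3],
                        [ 4, 30, 26],
                        [ 6, 20, 29]])
chi2, p, dof, _expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Kruskal-Wallis test: do the "White male" posterior-probability weightings differ across
# self-reported ancestry groups?
pp_reported_black = [0.05, 0.10, 0.08]
pp_reported_hispanic = [0.12, 0.20, 0.15]
pp_reported_white = [0.35, 0.40, 0.22]
print(kruskal(pp_reported_black, pp_reported_hispanic, pp_reported_white))
```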
To estimate the accuracy of sex prediction, we computed the number of individuals correctly predicted to be a particular sex and used Chi‐squared analysis to compare this prediction with the percentage of individuals who would be assigned male/female by pure chance (50:50). Following Lynch and Cabo‐Perez , we also include posterior predictive values (PPV). Thus, presented below are “% Correct” values which, for example, answer the question, “How many “actual” (self‐reported) females did the program identify as female?”; and “% Posterior Predictive Values” which, for example, answer the question, “Of the individuals estimated by the program to be female, how many are “actual” (self‐reported) females?” (see Ref. for an excellent summary of the utility of PPV in forensic contexts).
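The difference between “% Correct” (how many members of a self-reported group the program recovered) and the posterior predictive value (how many of the program's calls were right) is easiest to see from a small confusion table. The sketch below is illustrative only; the counts are invented, and the 50:50 chance comparison mirrors the goodness-of-fit test described above.

```python
import numpy as np
from scipy.stats import chisquare

# Invented confusion counts: rows = self-reported sex, columns = program prediction.
#                    predicted F  predicted M
confusion = np.array([[40,         22],    # self-reported female
                      [ 2,         44]])   # self-reported male

# "% Correct": of the self-reported females/males, how many were predicted as such?
pct_correct_f = confusion[0, 0] / confusion[0].sum()
pct_correct_m = confusion[1, 1] / confusion[1].sum()

# Posterior predictive value: of those *predicted* female/male, how many were correct?
ppv_f = confusion[0, 0] / confusion[:, 0].sum()
ppv_m = confusion[1, 1] / confusion[:, 1].sum()
print(f"% correct: F = {pct_correct_f:.1%}, M = {pct_correct_m:.1%}")
print(f"PPV:       F = {ppv_f:.1%}, M = {ppv_m:.1%}")

# Chance comparison: do correct calls depart from the 50:50 split expected by pure chance?
n_total = confusion.sum()
n_correct = np.trace(confusion)
print(chisquare([n_correct, n_total - n_correct], f_exp=[n_total / 2, n_total / 2]))
```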
RESULTS 3.1 Adult sample All ancestry and sex groups within the study population were de‐identified and grouped in a complete population pool for CBCT analysis and measurement. Once measurements were completed, data were input into (hu)MANid software for sex and race prediction. Predictive results from the program were compared with self‐reported data and analyzed for accuracy. The (hu)MANid program's overall sex classification accuracy was on par with forensic standards within the field at >70%–80% (Table ; Figure ). Posterior predictive values (PPV) indicate that if an individual was predicted to be female, they were very likely to be a female (96.6%), whereas the opposite was true for predicted males (Table ). A Chi‐squared test comparing reported and predicted sex found a statistically significant association ( p < 0.001). (hu)MANid's overall ancestry classification accuracy was 33.56% (34.59% without the Asian subsample), with two ancestry groups showing classification accuracies at or below 20% (Table ). In our sample, predictive accuracy for ancestry was not statistically distinguishable from the level of accuracy that would be expected by chance ( p = 0.119) using Chi‐squared goodness of fit testing. Posterior predictive values reveal relatively low rates of accuracy (28.93%–40.48%), with the Asian category as a notable outlier, as all individuals that were categorized as Asian ( n = 2) were Asian (Table ). Overall, our data show that the most common sex/ancestry predictions were Hispanic female followed by Hispanic male, while the most common actual identities were Hispanic female followed by White female and White male (Figure ). We compared the (hu)MANid‐generated posterior probability scores (likelihood of an individual being a particular ancestry/sex) by actual self‐reported ancestry group. Posterior probability scores for “White male” showed statistically significant differences across reported groups ( p = 0.002) while the Black female prediction scores were approaching significance ( p = 0.055). Thus, only the White male group showed a clear tendency toward accurate ancestry prediction. This is likely largely driven by the high accuracy of sex prediction, as the overall greatest ancestry prediction was for Hispanic (50%). 3.2 Adolescent sample Sex prediction did not exceed the level achievable by chance (Table ). In terms of ancestry prediction, we see higher correct classification percentages than in the adult sample (total combined ancestry percent correct classification: 47.5%; Table ). The White ancestry group had the highest combined prediction accuracy at 71.43%, but the lowest posterior predictive value (35.71%).
DISCUSSION In the original paper introducing (hu)MANid, Berg and Kenyhercz reported strong sex and ancestry prediction accuracy using their program. First, they verified (hu)MANid's computational accuracy relative to FORDISC 3.0 and SPSS version 22.0 . They reported that all three programs produced identical results with total correct sex classification >83.5% for pooled sexes and total correct ancestry classification of 53% for composite ancestry groups . Lynch and Cabo‐Perez applied the (hu)MANid method to 3D surface scans of mandibles. Using posterior predictive values, they found sex‐prediction accuracy ranging from 60% to 82% and ancestry‐prediction accuracy ranging from 7% to 26% . Additionally, a published abstract attempting to validate the (hu)MANid program in a sample of recent American Black, recent American White, recent Portuguese, medieval Nubian, and prehistoric Native American populations ( n = 505) found an average correct sex classification of 74% and a correct ancestry classification ranging from 8% to 45%, with combined sex and ancestry correct classifications between 3% and 41% . In our diverse adult CT scan‐derived sample, we show similar results to prior published literature on the accuracy of the (hu)MANid program as a forensic tool for sex estimation at 76% overall accuracy, with greater accuracy for males (95.83%) as compared to females (65.26%). When approached from the perspective of how many individuals the program estimated to be female were “actual” self‐reported females/how many individuals the program estimated to be male were “actual” self‐reported males, the frequencies flip (females: 96.6%, males: 58.1%). Thus, the program is estimating self‐reported females to be male much more frequently than the reverse. Similarly, Lynch and Cabo‐Perez found that in their 3D surface scan (laser scan) derived sample the PPV for females was much higher (92%) than for males (60%), though the opposite was true for their much smaller ( n = 41, 8 female) dry bone sample (female PPV: 75%, male: 97%). It should be noted that prior work has tended to find that CT scan‐derived data may suffer from slight magnification, with differences in measurements between dry and CT data tending to show larger CT values as well as larger average CT values in comparison with reference means . These differences were mostly small (<1% on average) but may be a potential explanation for why our data show higher accuracy of estimation for those who are self‐reported males than self‐reported females, as size is an important component of sexual dimorphism. Current published data by the team which created (hu)MANid do not break out sex estimation by group (female/male) independent of ancestry; additional comparative data employing this program using a dry bone sample are warranted. Similar to Lynch and Cabo‐Perez and Lynch et al. , we found a much lower rate of correct ancestry prediction for our adult sample than indicated in the Berg and Kenyhercz paper (53% pooled ancestry). In our sample, the overall ancestry correct classification rate was <35% and individual groups identifying as White or Asian were correctly identified <20% of the time. Aside from the Asian group, our posterior predictive values were roughly between 30% and 40%, which are in fact higher values than those seen in Lynch and Cabo‐Perez (7%–26%). In our sample, the (hu)MANid ancestry estimations yielded inaccurate results nearly twice as often as correct predictions.
Based upon the results in our adult sample, we would caution against the use of (hu)MANid for ancestry estimation in CT scan samples. Further work will be needed to determine whether this reflects sampling (3D surface scans/CT scans vs. dry bones) or a limitation of the (hu)MANid application itself. Importantly, we did find sex estimation at a sufficient level of accuracy for forensic utility (>70%–80%). For the 20%–40% of individuals with agenesis of one or more third molar [ , , ], it may be difficult to distinguish an older adolescent from a young adult on the basis of an isolated mandible. Thus, it is valuable to know whether (hu)MANid can produce similar accuracy in older adolescents as in adults. Our data show low accuracy (45%) for sex estimation in adolescents (age 15–17), using CT scan data. It is particularly interesting that nearly half the samples predicted by the program to be male were in fact female. The reverse, the prediction of female for self‐identified adolescent males, is less surprising as size is a meaningful component in sexual dimorphism and males experience a greater magnitude of mandibular growth over a longer, later duration . This would appear to suggest that ongoing changes to mandibular shape in late adolescence, in both males and females, may be an important consideration for sex estimation in this age group. Somewhat unexpectedly, the (hu)MANid program produced ancestry estimation rates for our adolescent sample that were both higher than the adult accuracies and on par with the previously reported accuracy rates of Berg and Kenyhercz of roughly 53% (percent correct classification: 33.96%–60%) compared with our adult sample (19.3%–50%). Given the small ancestry subgroup sizes in the adolescent sample, however, we suggest caution in over‐interpreting these results, as the observed higher values may be stochastic, though further work is warranted. As the program currently stands, our data suggest using caution when applying the (hu)MANid program in any form (i.e., sex or ancestry) to individuals of unknown/unclear adult status, at least when using CT data. Our study highlights the difficulty of ancestry estimation, especially with regard to self‐reported, retrospective ancestry data. Currently, within the field there is an ongoing dialogue regarding the use of “ancestry” estimations (e.g ). Ancestry estimation is increasingly drawing scrutiny within the forensic community due to issues such as application of reference groups [ , , , ]. In our study, we grouped our sample into “White,” “Hispanic,” and “Black” to mirror potential output categories from (hu)MANid. This method of grouping fails to accurately portray the diversity within those that would self‐identify within each of these groups , and does not allow for more detailed, population‐level approaches as is argued for in a population affinity approach . The population affinity approach is aimed at more thoroughly delving into the multifactorial influence of climate, as well as various evolutionary and biological processes that lead to human variation [ , , ]. An example of a challenge within the current structure is “Hispanic/Latinx” as a category of “ancestry.” An individual identifying as Hispanic is often self‐identifying based on various, and varied, social constructs of this term [ , , , ]. However, this individual may be from a large variety of communities whose populations may display significantly distinct social and physical characteristics .
The growing number of other studies showing relatively low accuracy in predicting “ancestry” using (hu)MANid may support a more thorough assessment of an individual's background and comparison to highly specific reference groups, such as is argued by Ross and Pilloud . One notable limitation of the current study is that the (hu)MANid program utilizes reference data from physical mandibular specimens and was therefore developed utilizing physical instruments such as calipers and mandibulometers. However, due to the nature of our study, CBCT data were analyzed virtually using volume rendering software (3D Slicer). While scaling was accounted for, the nature of our virtual analysis on 3D Slicer differs from the reference data and presents a potential area of error in our sample. We note above ways in which we modified data collection methods to suit this data type. In particular, it is a challenge to recreate mandibulometer measurements in 3D, though Garvin and Severa demonstrate a method that we would advocate be employed in future such research. Furthermore, a recent paper found that inclusion or exclusion of the mandibulometer measurements did not appear to affect outcomes; thus, another option would be to exclude these measurements from future work that relies upon 3D scans/CT scans. It is also important to consider bias in the sample; many patients presenting to the Orthodontics and Oral Surgery clinics at University of Illinois Chicago are referred due to orthopedic case severity. These individuals may require surgical intervention for their oral health needs due to maxillary or mandibular growth variations. This was not quantified in our study and may skew the mandibular measurements as the population may be outliers relative to true population norms. On the other hand, using a sample that includes individuals with malocclusion (even severe) does reflect the range of diversity found in the general population. Finally, our sex, racial, and ethnic (ancestry) data were taken from the electronic health records of patients retrospectively. This self‐reported data may not fully capture the nature of an individual's true heritage. “Racial” and “ethnic” (Hispanic Yes/No) identification/groupings are gross oversimplifications of an individual's genetic ancestry and are not necessarily biologically representative. Furthermore, due to the nature of a highly diverse population, these singular identification techniques do not necessarily capture the potential extent of admixture within the population . The current study demonstrates the value of the (hu)MANid program for forensic analysis and prediction of sex utilizing mandibular samples, or even CT imagery. However, the program's current predictive model does not appear to be adequate for accurate assessment of ancestry in our diverse Chicagoland sample or may be inadequate when using CT scan data. Future studies should be conducted assessing other diverse groups to verify these results. As CBCT analysis continues to become more widely utilized in the medical and dental fields, there are increasing options for very large, diverse datasets of known provenience. Utilization of larger samples from clinical settings could help hone the algorithms underlying this program to potentially allow for more accurate prediction. In conclusion, our results reveal that the (hu)MANid program can be applied to CT scan‐derived data to achieve similar accuracy of sex estimation as seen using dry bone for adults.
In our adult sample, we see reduced utility for estimation of ancestry. We posit this is likely a combination of limitations related to measurement/scoring in a virtual setting and perhaps also the nature of our sample being diverse, contemporary individuals. This is supported by the fact that other works using the (hu)MANid method have also found relatively lower accuracy for ancestry estimation than the originally published values . This may be due to limitations of the comparative samples that the program was developed from and might be improved through increasing the size and diversity of the dataset used to create the model underlying this application. We fail to demonstrate utility of the program for adolescent CT data, particularly for sex estimation, and caution against its use in these contexts, though further work is needed to determine whether application to dry samples would increase accuracy. Our results further support the movement within forensic anthropology away from traditional, race‐based “ancestry” categorization toward a more nuanced understanding of human biological variation rooted in evolutionary frameworks. The (hu)MANid program now has evidence in support of its use for sex estimation using dry bones, surface scans, and clinical CT scans, increasing the range of potential use in forensic and bioarcheological contexts.
IRB: #2019‐0399.
|
Methods for preparing tissue microarray slides using xenografts with different levels of HER2 expression to standardize HER2 detection
|
63ce5afd-181e-456d-aa8a-51d6def21ec7
|
10100237
|
Anatomy[mh]
|
Human epidermal growth factor receptor type 2 (HER2) is a prognostic indicator of breast, gastric, salivary gland, and colorectal cancers. Against HER2‐positive advanced/recurrent gastric cancer, trastuzumab was shown to improve overall survival when used in combination with standard chemotherapy in the ToGA trial. To appropriately utilize such HER2‐targeting drugs, histopathological determination of HER2 status before treatment is crucial. Recent guidelines for breast cancer and gastroesophageal cancer include sections on quality control, such as how to conduct quality control testing in laboratories, , indicating that further improvements to testing accuracy are needed. Thus, standardization with control slides is required to obtain good staining results. Several studies have reported on control slides which use clinical breast carcinoma tissues, , cell lines, , or peptides. While control slides using cell lines are commonly utilized and are still useful, when examining clinical samples with heterogeneous histology, such as gastric cancer, control slides that better reflect the clinical histology may be more valuable, particularly tissue microarray (TMA) from xenografts with host stroma. However, there are no reported methods for preparing and stably supplying such TMA slides as staining controls for HER2 testing. The aim of this study was to provide a TMA composed of xenografted tumor that can be used for more comprehensive evaluations, including of tissue structure, rather than simply checking for positives and negatives. We established a protocol for preparing TMA control slides in order to further standardize HER2 staining conditions in clinical practice.
Preparation of TMA slide for HER2 testing In advance, the HER2 immunohistochemistry (IHC) score and HER2 gene amplification of the human gastric cancer cell lines NCI‐N87, SCH, MKN‐74, and MKN‐45, which are used to prepare TMA slides for HER2 detection, were confirmed by performing IHC and fluorescence in situ hybridization (FISH) at SRL (SRL, Inc.). For the cell lines used for TMA, xenograft tumor masses were prepared in advance in NOG mice (NOD/Shi‐scid, IL‐2RγKO Jic, In‐Vivo Science Inc.), and the most appropriate tumor line was selected based on the IHC and FISH results. All four cell lines (1 × 10^7 cells) were implanted subcutaneously in 6‐ to 8‐week‐old female NOG mice. All animal procedures and experiments were conducted with the approval of the Committee for Animal Experimentation of the National Cancer Center, Japan (approval No. K13‐004). After the implanted cell lines had formed tumors of 1 cm in diameter, the tumors were harvested. Immediately after harvesting, the paraffin block was prepared by fixing with 10% neutral‐buffered formalin for 24 h as recommended by the guideline. , , Paraffin blocks for tumors formed from each cell line were sectioned into three cores with diameters of 3 mm, and re‐embedded in a single block. Sections of 4‐μm thickness were prepared as TMA slides for HER2 detection (Figures and ). IHC of HER2 In this study, immunostaining was performed using six in vitro diagnostic kits from four manufacturers that are commonly used in clinical practice for immunostaining of HER2 protein, as shown in Table . Immunostaining was performed according to the instructions provided with each kit. Manual immunostaining was performed using the Dako and Histofine kits, for which manual methods were available. For the other kits, we used the automatic immunostaining device (Bond, BOND‐MAX; Ventana, BenchMark) designated by each company. Stability of TMA slide To assess the stability of TMA slides, the sliced TMA slides were stored for 6 months. Slide specimens were stored in a refrigerator after being coated with paraffin (Para‐Mate, Kaken Geneqs, Inc.). After 6 months of storage, unstained TMA slides were subjected to IHC using the Dako HercepTest II, and the stainability of HER2 in each tumor strain was compared to the specimens stained 6 months earlier. Evaluation of HER2 immunostaining properties HER2 scoring was performed under light microscopy in accordance with the ToGA trial criteria (Supporting Information: Supplementary Table ). The evaluation was conducted under the consensus of two pathologists.
The characteristics of the human gastric cancer cell lines The IHC and FISH results preliminarily confirmed that the properties of each gastric cancer cell line are suitable for the preparation of TMA blocks for HER2 detection (Table ). Staining results with the various diagnostic kits Staining results are shown in Table , and Figures and . By observing the three cores of each cell line, we were able to make accurate judgments (Figures and ). In Dako HercepTest II, the TMA slides for HER2 testing in this study showed good staining with no nonspecific reactions with mouse‐derived components (connective tissues or blood components) in any of the tumor cell lines. Especially in the NCI‐N87 tumor (3+) and SCH tumor (2+) cell lines, the contrast between the host stromal tissue and HER2 positive tumor cells was clear and heterogeneity was also observed in the histological images, similar to actual clinical samples (Figure ). In addition, the Histofine HER2 kit (POLY), Ventana I‐View, and ultraView Pathway HER2 showed almost the same results (Supporting Information: Supplementary Figures – ). Histofine HER2 (MONO) showed similar stainability to Dako HercepTest II in NCI‐N87 (3+), SCH (2+), and MKN‐45 (0) (Figure ). However, it was difficult to determine the staining intensity of 1+ (<10%) in the MKN‐74 cell line (Figure ). The Bond Polymer system HER2 test also gave similar results (Supporting Information: Supplementary Figure ). Stability of TMA slide According to the HER2‐positive images, TMA slides stored for 6 months maintained similar staining in all tumor cell lines compared to the initially stained TMA slides (Supporting Information: Supplementary Figure ).
As a result of this evaluation, tumor tissues with HER2 scores of 3+ or 2+ in the TMA showed adequate staining with all six testing kits. Furthermore, in the score 3+ and 2+ tumor cell lines, the contrast between the host stromal tissue and HER2 positive tumor cells was clear, heterogeneity was observed, and the histological images were similar to those of clinical samples. In addition, sliced TMA slides were confirmed to be stable for 6 months. Thus, the TMA slides prepared in this study can be used as external controls to support the standardization of staining conditions for various kits. Control slides using cell lines have been commonly used as external controls to support the standardization of the staining conditions of various kits. , These are very useful for controlling the quality of the antigen‐antibody reaction of the primary antibody in immunohistochemical staining, but it is not possible to see the effect of the background, such as that of the host stromal tissue, on the immunostaining result. Therefore, when examining clinical samples with heterogeneous histology, such as gastric cancer, a control slide that better reflects the histology of the clinical sample, such as TMA prepared from xenograft with host stromal tissue, would be very useful. However, with two of the six kits we tested, tumor tissues with a HER2 score of 1+ showed diminished staining. We think the question of how to evaluate 1+ rated tissues is important, so we hope to obtain good 1+ tissues in the future. In addition, because NOG mice are genetically deficient in B cells and have no detectable immunoglobulin G or immunoglobulin M, no nonspecific responses derived from mouse components were observed. NOG mice are a very suitable host animal for preparing TMA slides. In conclusion, TMA slides prepared in this study have the potential to standardize staining and improve the accuracy of HER2 detection in clinical practice.
Conception and design of the study : Keigo Yorozu, Kaoru Hashizume, Naoki Harada, Atsushi Ochiai. Acquisition and analysis of data : Keigo Yorozu, Mitsue Kurasawa, Yuki Iino, Yuka Nakamura. Drafting and writing the manuscript and figures : Keigo Yorozu, Naoki Harada. Review and editing of the manuscript : Kaoru Hashizume, Atsushi Ochiai. Review and final approval : All authors.
K.Y., M.K., K.H., and N.H. are employees of Chugai Pharmaceutical. This study was financially supported by Chugai Pharmaceutical Co., Ltd. The other authors declare no conflicts of interest.
|
The Evolving Role of the Rheumatology Practitioner in the Care of Immunocompromised Patients in the
|
67c90459-4d01-4830-af94-d6b92a44fa26
|
10100242
|
Internal Medicine[mh]
|
The impact of the global COVID‐19 pandemic on the field of rheumatology has been dramatic and broad‐ranging. These effects include the pandemic's ongoing influence on models of care delivery, the intermittent impact on drug availability to our patients, and the body of research our field has contributed toward increasing our understanding of the disease's epidemiology, basic and clinical immunology, clinical outcomes, and vaccinology in immunocompromised hosts. The next phase of the pandemic cannot be totally predicted, but there is broad agreement that SARS–CoV‐2 as a global pathogen is unlikely to disappear quietly; the virus appears to be becoming endemic, and it is likely we will face continued emerging variants of unpredictable pathogenicity. If this prediction comes to fruition, SARS–CoV‐2 infections will likely impact the population along 2 different paths. The first and most common scenario will be new or recurrent infection among healthy, previously exposed, or vaccinated individuals whose disease course will, in the vast majority of cases, be of lesser severity and low mortality. The second, more troubling scenario will occur with infections in our immunocompromised patients who, even if vaccinated, are more likely to experience severe outcomes from the disease ( , ). We, as rheumatologists, must prepare for the latter scenario even though it is yet to be determined what will constitute best practice models for both prevention and care for our immunocompromised patients. There is clearly no one‐size‐fits‐all approach, as rheumatologists practice in many different models of care, ranging from solo practices to small‐ and large‐group practices, multispecialty, and hospital‐based practices; in each of these settings, practitioners have access to varying levels of resources. The goals, however, are the same for all of us: protecting our most vulnerable patients from infection wherever possible and contributing to providing or directing those who become infected to the best possible care. While these goals seem unassailable, they raise important questions about the boundaries of rheumatologic care. These questions are similar, in some ways, to the controversies surrounding our role in the care of other non‐rheumatologic problems: cardiovascular risk management, diagnosis and treatment of infections, providing vaccinations, and management of other medical problems in patients with complex conditions. One could easily ask whether COVID‐19 should be considered different from any other medical problem we may either treat ourselves or refer to others. We believe it is different, and accordingly, we would like to start a discussion within the rheumatology profession in this call to action by posing a series of questions and possible answers regarding how these goals may be achieved.
Defining and determining precisely who is immunocompromised is surprisingly difficult and a work continually in progress. Definitions put forth by the Centers for Disease Control and Prevention (CDC) ( ) are inadequate and note that such determination may be best arrived at in consultation with a specialist. Data from numerous studies including the COVID‐19 Global Rheumatology Alliance ( ) and others ( ) have provided insights into epidemiologic factors imparting risk, including comorbidities, disease activity, and the use of certain immunomodulatory drugs, especially rituximab and glucocorticoids. While measuring serologic response to vaccination has also been discouraged by the CDC ( ) and is not currently recommended by the American College of Rheumatology (ACR) COVID‐19 Vaccine Task Force due to a lack of supportive data at the time of the last guidance release, other groups such as the transplantation community ( ) have advocated for such testing to identify the most vulnerable among their patients. The status quo of such a nonspecific definition is not acceptable, and we urgently need to develop more quantitative biomarkers, both clinical and laboratory‐based, to identify those among our community who are most at risk and need prioritization of resources.
COVID‐19? While prevention of infection is clearly the most desirable strategy for limiting the effects of COVID‐19 on our immunocompromised patients, our capacity to achieve this with continued viral evolutionary escape has become challenging, and we have more realistically moved to preventing severe outcomes, including death from primary or breakthrough infection, as our highest priority. The most powerful preventive measure currently available against severe clinical outcomes is vaccination; however, both patients and practitioners are often confused by the rapidly changing guidelines for immunocompromised individuals, as well as the general community, and thus need to be continually educated. Despite strong evidence of benefit from vaccination, additional dosing, and boosting, it is unfortunate that not all of our patients are willing or able to be vaccinated ( ). Furthermore, and implicit in the definition of the immunocompromised state, those most heavily immunocompromised are unlikely to fully respond, both immunologically and clinically, to vaccinations; this includes those receiving rituximab as well as other immunosuppressive therapies ( , ). Providing clear education to our most immunocompromised patients regarding the continued use of some level of nonpharmacologic measures of infection prevention, including masking in public and social distancing when appropriate, is also important ( ). Most important for the most severely immunocompromised patients, however, is preexposure prophylaxis with tixagevimab and cilgavimab, which have been demonstrated to significantly reduce the likelihood and severity of COVID‐19 ( ). Often unclear is exactly who on the health care team should initiate such discussions and referrals, which may lead to lapses in administration to those who would benefit most. Individual practices and groups should give consideration to system‐based screening of patient treatment records to identify those at the highest risk, such as those undergoing B cell–depletion therapy, and to proactively contacting those patients to ensure that preexposure prophylaxis has been offered and encouraged. It is our belief that the rheumatologist should take the lead in preexposure prophylaxis and accordingly must commit to both learning about the therapy and becoming familiar with how and where it can be accessed in their area.
COVID‐19? Rheumatologists in general will have a limited role in managing patients hospitalized with COVID‐19, but they will be confronted with determining what their role is in diagnosing and managing outpatients. The armamentarium of COVID‐19 outpatient treatments is rapidly growing ( ) and includes monoclonal antibodies (a constantly changing option due to viral escape), which need to be administered within a specific timeline (generally 7–10 days) ( ), as well as both a single parenteral and several oral antiviral agents, which must also be given within a critical time window (i.e., 0–5 days after symptom onset for oral therapies, 0–7 days for parenteral therapy) (12). With this in mind, we believe that rheumatology practitioners must educate our immunocompromised patients on how to be rapidly diagnosed and treated, in the event that they do become infected (Figure ). Such education includes an understanding of the time urgency of available therapies, vigilance for early signs or symptoms of COVID‐19, and encouraging access to rapid testing (preferably at home). Another critical step in optimal COVID‐19 outpatient care is clear awareness of who to call once an immunocompromised patient is diagnosed or strongly suspected of having COVID‐19; such a resource (i.e., physician, advanced practitioner, consultant) can then direct the patient to the appropriate outpatient treatment without delay. Each practice setting, whether solo, small‐ or large‐group, or academic center, should have its own network of consultants to refer patients to, or commit to initiating such therapies itself. Finally, it is unknown whether our immunocompromised patients will have more severe or more frequent sequelae from COVID‐19 (post‐acute sequelae of COVID‐19, or long COVID), but early research demonstrates that prolonged symptom duration is common in our patients ( ). Given the current lack of firm diagnostic criteria or biomarkers and no evidence‐based therapies, the profession will have to remain engaged in active research to answer these questions; our future role in diagnosis and management for now is far from clear.
Defining our role in this rapidly changing landscape of care will continue to be challenging, and we propose that, at minimum, rheumatologists should maintain declarative knowledge of COVID‐19, allowing them to educate their immunocompromised patients, critically appraise the data on preventative and therapeutic options, and be able to prescribe or at least direct immunocompromised patients to the most current and effective care pathway in the event of infection. Given the explosion of peer‐reviewed publications, pre–peer review publications, and online sources of unfiltered data often referred to as gray literature (e.g., as of July 1, 2022, there are >11 billion citations on COVID‐19), it is more challenging than ever to be confident in our knowledge base regarding the changing practice standards for COVID‐19 prevention and management. As a result, most practitioners must rely on real‐time or living guidelines on disease management from organizations invested in providing such guidance. Examples of such information include the National Institutes of Health, CDC, Infectious Disease Society of America, and the ongoing efforts by the ACR ( ), which summarize and provide the best available evidence, including vaccine recommendations, nonpharmacologic and pharmacologic preventive measures, and outpatient therapeutic options. Rheumatology meetings and educational venues must also step in to address clinical needs surrounding prevention and management for the practitioner in addition to presenting the latest basic and clinical science on the disease. The road ahead unfortunately appears to be a long one, and we as professionals who manage immunocompromised patients have an additional formidable challenge to address through education, prevention, and management (12).
All authors drafted the article, revised it critically for important intellectual content, and approved the final version to be published.
|
Challenges in upscaling laboratory studies to ecosystems in soil microbiology research
|
61d8c231-f5e4-43da-bc57-06ab76b60128
|
10100248
|
Microbiology[mh]
|
J.C., Y.Z., Y.K., D.W. and J.E.O. contributed equally to this paper.
The authors declare no conflict of interest.
|
Soil viruses: Understudied agents of soil ecology
|
b96353d9-7e78-4188-9d0d-1230b82454d2
|
10100255
|
Microbiology[mh]
|
Current counts of soil viral abundances have revealed that viruses are as abundant as, or more abundant than, their hosts. Most information about viral numbers in soil has been obtained by counting of bacterial viruses (bacteriophage) that can be identified by microscopy and/or cultivated with their bacterial hosts (Williamson et al., ). Direct microscopic counts of virus‐like particles (VLPs) from different soil types revealed approximately 10^8–10^10 VLP per gram dry weight of soil (Williamson et al., ), with higher numbers in forest soils when compared to agricultural soils (Williamson et al., ). However, the true number of soil viruses may be even higher than that obtained by microscopy, because many viruses are intracellular and not able to be imaged separately from their hosts. Even free viruses are often difficult to distinguish from the background of soil particles, and the subcellular size of many viruses has further impeded their direct visualization in soil. However, DNA viruses range greatly in size from 20 nm to giant viruses that are up to 500 nm in diameter. The size and shape of viruses largely depend on the size of their genomes and protein arrangements that surround the genome. They are normally 20–50 times smaller than bacterial cells (Kuzyakov & Mason‐Jones, ), but some giant viruses recovered from permafrost soil are larger than typical bacteria (Legendre et al., ). The advent of metagenomics ushered in a new opportunity to scan different habitats for viral sequences (Edwards & Rohwer, ). Sequencing overcame the limitation of relying on cultivation and/or microscopy for detection of viruses. With increases in sequencing depth came the possibility of increasingly better coverage of DNA viral sequences. Currently, several complete genomes of novel soil viruses have been obtained from soil metagenomes (Wu, Davison, Nelson, et al., ). Extraction of viral particles prior to metagenome sequencing has been shown to increase recovery of viral populations over bulk sequencing approaches (Santos‐Medellin et al., ). Although we know more about soil DNA viruses based on traditional microscopic and culture‐based analyses, RNA viruses are also abundant in soil. Recent screenings of soil RNA sequences (metatranscriptomes) have revealed a diversity of RNA viruses in different grassland soils (Starr et al., ; Wu, Davison, Gao, et al., ). Many of the detected RNA viruses have bacterial hosts, but several are predicted to have eukaryotic hosts. Although there are too few studies to make sweeping generalizations, there is a trend towards different dominant RNA viruses in different soil habitats. For example, different RNA viruses had higher representation in different grassland soils: Mitoviridae from a California annual grassland soil (Starr et al., ) and Reoviridae in Kansas native prairie soil (Wu, Davison, Gao, et al., ). There are still several remaining research questions to be addressed. These include, but are not limited to, the following: What are the hosts of soil viruses? The vast majority have not yet been linked to their hosts. Are the soil viruses that have been detected by sequencing approaches active, inactive, or dead? Recently, stable isotopes were used to distinguish active from inactive viruses in a peat soil (Trubl et al., ). This approach shows great promise for application to other soil ecosystems. Another question is whether specific soil bacteriophages are lysogenic or lytic, and what environmental changes trigger transitions between viral lifestyles.
Studies of soil viral sequences in soil metagenomes have shown that some viral genomes contain auxiliary metabolic genes (AMGs) that are not required for normal viral replication and reproduction. For example, a viral gene that encoded an endomannanase enzyme was detected in permafrost metagenomes and functionally validated (Emerson et al., ). Recently, another AMG that encoded a chitosanase enzyme was not only functionally characterized but also crystallized to obtain the protein structure (Wu et al., ). The protein structure was used to predict the mode of action of the viral chitosanase. Interestingly, the protein comprised two domains: one was typical of some endoglucanase enzymes, whereas the other was a novel, loopy domain. The viral chitosanases were phylogenetically distinct from chitosanases in bacteria and fungi. In addition, soil viral chitosanases grouped separately from those in other ecosystems, such as marine systems. The implication of these findings is that soil viruses have the potential to provide metabolic reactions that complement those of their hosts. Many other potential AMGs have been detected in soil viral sequences from soil metagenomes. These span a range of potential functions, including cycling of carbon and nitrogen compounds, lipid and protein metabolism, and host metabolism and energy generation (Wu, Davison, Nelson, et al., ). There are several remaining questions to address, including the following: What are the functional roles of the different kinds of proteins that are expressed from AMGs carried on soil viruses? Does AMG expression benefit survival of the host under specific environmental conditions?
Several recent studies have shown that soil viruses are influenced by changes in their environment. For example, more vOTUs were detected in native prairie than in conventionally tilled soils (Cornell et al., ). Changes in climate can also result in shifts between viral lytic and lysogenic lifestyles (Wu, Davison, Nelson, et al., ). These impacts on soil viruses can have cascading effects on their hosts and environment. For example, transition of a temperate phage to a lytic cycle results in killing of its host. Often, the most dominant bacteria are those that are lysed, leaving room for less abundant members of the soil microbiome to grow and take their place. This is the 'kill the winner' hypothesis (Våge et al., ). Alternatively, during lysogeny, viruses can replicate together with their hosts, a process that becomes more prevalent at higher host densities, in line with the 'piggyback the winner' hypothesis (Knowles et al., ). As the soil environment is impacted by land management or climate change, soil viruses also influence the ability of their hosts to survive and/or adapt. When the hosts are lysed, they release carbon and nutrients into the soil environment that are subsequently consumed by other members of the soil biota. This sidestepping of the soil food web, in which bacteria would otherwise be consumed by protists or other predators, is known as the 'viral shunt'. Ultimately, the recycling of soil nutrients by heterotrophs can impact soil ecology through the entrapment of those nutrients in microbial biomass. As the microbes die, the resulting necromass can serve to store soil carbon and may be a valuable carbon sink if entombed in soil nanopores (Kuzyakov & Mason-Jones, ), particularly if associated with deep-rooting perennial grasses that can drive the carbon deeper into the soil. This aspect of soil viral ecology could be important for helping to store carbon in deep soils but needs to be further explored to validate its potential. Another way that soil viruses can contribute to soil ecology is through their expression of AMGs, many of which are predicted to play a role in cycling of carbon and other nutrients. For example, the chitosanase AMG described above expresses a functional chitosanase enzyme. Therefore, it could play a key role in the decomposition of chitin, an abundant carbon polymer in many soils derived from the breakdown of fungal cell walls and insect exoskeletons. In this example, the chitosanase enzyme was predicted to reside on a proteobacterial phage from a forest soil (Wu et al., ). Thus, it is intriguing to hypothesize that the viral chitosanase enzyme (V-Csn) contributes towards chitin metabolism by its bacterial host to aid in nutrient acquisition. Remaining questions to address include the following: Does the viral shunt aid in soil carbon storage? What other types of AMGs are carried on soil viruses, including those that potentially generate energy and trigger dormancy in their hosts? Do different types of bacteriophage primarily express AMGs during lysogenic or lytic cycles, or both?
In summary, recent explorations of soil viruses are beginning to unveil not only their identities but also their functional roles in soil environments. However, much remains to be learned about how different soil viruses interact with their hosts and how different environmental conditions influence those interactions. Today, most soil viruses remain uncharacterized beyond their sequence similarities to known viruses, and the majority are novel, with no close similarity to known viruses. In addition, there have been few studies of isolated soil viruses interacting with their hosts, other than well-characterized model viruses that have been easy to cultivate. Because soil viruses are so abundant and so responsive to changes in their environment, the downstream implications for their hosts and for the soil ecosystem can be profound. Therefore, the study of soil viruses and their influence on soil ecology represents a tremendous future research opportunity.
J.K.J. wrote the manuscript.
J.K.J. has no conflicts of interest.
|
Post‐mortem 7T MR imaging and neuropathology in middle stage juvenile‐onset Huntington disease: A case report
|
5eefb268-8a7b-40f7-aebe-ee7f388660a5
|
10100344
|
Forensic Medicine[mh]
|
The authors declare to have no conflict of interest with the presented findings.
H.S.B.: conception of the idea, organisation and writing the first draft, processing feedback; S.G.D.: Vonsattel gradation; J.B.: radiological assessment; W.M.C.R.M: dissection and preservation of brain tissue; L.W.: imaging protocol, supervision conception of idea and feedback; S.B.: clinical description, supervision conception of idea and feedback.
The peer review history for this article is available at https://publons.com/publon/10.1111/nan.12858 .
|
Improving mitotic cell counting accuracy and efficiency using
|
1cae3037-e79d-4e23-9b3b-97b8c20b78fb
|
10100421
|
Anatomy[mh]
|
Mitotic score is a key component of breast cancer (BC) grading and is a strong predictor of survival, reflecting the underlying biological behaviour of the disease. However, it is the most time-consuming component to assess and is also constrained by low interobserver reproducibility. Mitotic count discrepancy is considered a frequent cause of overall grade discordance. The poor reproducibility of mitotic count is mainly attributed to the challenges in detecting mitotically active regions in haematoxylin and eosin (H&E)-stained slides and to the presence of mitotic mimickers such as hyperchromatic nuclei and karyorrhectic or apoptotic cells; even cells in prophase are usually not considered during routine scoring of mitotic figures. Additionally, the heterogeneity of mitotic activity in different regions and variations in cell density might all be aggravating factors. Histone H3 is one of the five histone proteins that together form the major protein constituents of chromatin in eukaryotic cells. Phosphorylated histone H3 (PHH3), detected using antibodies directed against it, is almost exclusively expressed in actively proliferating cells during the M phase and late G2 phase and is not observed during apoptosis. The utility of PHH3 has been evaluated in various tumours, including melanoma, neuroendocrine tumours, colorectal and ovarian carcinomas, sarcomas and central nervous system tumours, and revealed correlation with outcome. Although staining results of both H&E and PHH3 can be viewed using a conventional bright-field microscope, H&E alone cannot reflect the presence and distribution of underlying specific antigens, just as PHH3 protein expression alone cannot be evaluated adequately without the context of tissue morphology. The dual-staining technique proposed in this work enables visualization of morphology and molecular profiling on the same tissue section and can thus improve the overall accuracy, quality, and diagnostic precision. Another advantage of this approach is that computational stain separation can be performed on a dual-stained image to obtain an H&E and an immunohistochemistry (IHC)-stained whole-slide image from the same tissue section, thus eliminating the need for image registration from serial sections. Consequently, the proposed scheme can be used for the development of computational pathology-based biomarker prediction algorithms directly from dual-stained histopathological images without the need for image registration or correspondence analysis. We hypothesised that combining the H&E and IHC techniques might provide an optimal method for mitosis detection and counting in BC, and that dual staining of BC tissue sections with PHH3 and H&E would improve the concordance of mitosis counting, and hence the overall grade.
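The computational stain separation mentioned above can be prototyped with standard colour-deconvolution tools. The sketch below is a minimal illustration only, assuming scikit-image is available; the file name and the minimum-area filter are hypothetical and not taken from this study, and the snippet is not the authors' pipeline. It splits a dual-stained RGB tile into haematoxylin, eosin and DAB channels and counts connected DAB-positive objects as candidate PHH3-positive nuclei.

```python
# Illustrative colour deconvolution of a PHH3 (DAB) + H&E dual-stained tile.
# Assumes scikit-image; the input file and the area cut-off are placeholders.
from skimage import io, color, filters, measure

rgb = io.imread("dual_stained_tile.png")[..., :3]   # hypothetical RGB tile
hed = color.rgb2hed(rgb)                            # haematoxylin, eosin, DAB channels
dab = hed[..., 2]                                   # DAB channel carries the PHH3 signal

mask = dab > filters.threshold_otsu(dab)            # crude positive/negative separation
labels = measure.label(mask)                        # connected components
nuclei = [r for r in measure.regionprops(labels) if r.area >= 50]  # drop speckle (arbitrary cut-off)

print(f"Candidate PHH3-positive objects in this tile: {len(nuclei)}")
```

Any such automated count would still need to be checked against the morphological criteria described in the Methods below, because G2-phase cells also stain for PHH3.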
This study was conducted on a cohort of primary invasive BC in which pseudonymised patient tissue samples were used. Two full-face tumour sections 4 μm thick from 97 cases were cut; one was stained with H&E only, and the other was stained with PHH3 counterstained with H&E. The cases were selected to represent different grades of BC. Clinical information and tumour characteristics, including patient's age at diagnosis, histological tumour type, grade, tumour size, lymph node status, Nottingham Prognostic Index (NPI), and lymphovascular invasion (LVI), were available. Outcome data were calculated and included BC-specific survival (BCSS), defined as the time (in months) from 6 months after the date of primary surgical treatment to the time of death due to BC, and distant metastasis-free survival (DMFS), defined as the time (in months) from 6 months after surgery until the first event of distant metastasis. Data for oestrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and Ki67 were available as previously published. ER and PR positivity were defined as positive nuclear staining in ≥1% of the invasive tumour cells. The proliferation index was evaluated using Ki-67 antibody staining and defined as high when ≥14% of tumour cells showed nuclear positivity. Immunoreactivity of HER2 was assessed using HercepTest guidelines. HER2 positivity was defined as strong positive complete membranous staining in ≥10% of the invasive tumour cells (score 3+). HER2 gene amplification status was assessed in borderline cases (IHC score 2+) using chromogenic in situ hybridisation (CISH), using the HER2 CISH pharmDx kit (Dako, Carpinteria, CA, USA), as previously described.
PHH3–H&E counterstaining
Representative paraffin-embedded tissue blocks of BC tissue were retrieved and processed using a protocol for dual H&E and IHC staining; 4-μm tissue sections were cut onto charged slides and then placed on a 60°C hotplate for 20 min. After rehydration, slides were submerged in citrate buffer at pH 6.0. Water bath heat-assisted retrieval with citrate buffer was applied for 30 min at 96°C. Rabbit polyclonal anti-PHH3 (Abcam, Cambridge, MA, USA; phospho S10 antibody, ab5176) was diluted at 1:100 in Leica antibody diluent (RE AR9352, Leica Biosystems, Newcastle upon Tyne, UK) and incubated with the sections for 60 min at room temperature. The DAB (Novolink kit, Leica Biosystems) working solution was applied. Haematoxylin nuclear stain was applied for a longer period (8 min); to remove nonspecific background staining and to improve contrast, weak acid alcohol was used, and eosin counterstain was then applied (2 min); Figure . Tonsil tissue was used as a positive control. Stained slides were scanned at 40× magnification using a high-throughput slide scanner (Pannoramic 250 Flash III; 3DHistech, Budapest, Hungary), and the slides were then viewed with the case viewer software program (v. 2.2.0.85; 3DHistech).
Mitotic counts on H&E slides and PHH3–H&E dual-stained sections
We assessed the utility of adding PHH3 to routine H&E in scoring mitosis and grading BC by comparing mitosis counting using this technique with traditional mitosis scoring using H&E only.
Interobserver agreement in detecting mitotic figures
For assessment of the reproducibility of each staining technique, two sections from each case were utilised: one stained with H&E only and the other stained with PHH3 and counterstained with H&E.
A 3 mm² rectangle was drawn in the same region of each of the two slides, and mitotic figures within each rectangle were counted; Figure . Mitotic counts using H&E and dual PHH3–H&E immunostaining techniques were independently scored by two certified pathologists to measure the agreement between them. The technique that achieved the highest level of agreement was considered the most reliable one. For each staining technique, the average time required to count mitoses was recorded.
Interobserver concordance on hotspot identification
To determine the most effective method for revealing the greatest number of mitotic figures (hotspots), we evaluated the agreement of two pathologists in detecting mitotic hotspots in 20 whole-slide images (WSIs) by having each of them draw a 5-mm² circle in the area with the highest number of mitotic figures using the circle annotation tool in the toolbar. Agreement was reached when these circles overlapped or intersected.
Image analysis-assisted PHH3 indices
We assessed the degree of agreement between manual counting and digital image analysis (DIA) tools (ImageJ, NIH, Bethesda, MD, USA [v1.53f51] and QuPath [v0.3.1; Queen's University Belfast, Belfast, UK]) in counting mitoses using PHH3–H&E and conventional H&E-stained slides, in addition to quantifying the number of PHH3-stained G2 phase cells, using 40 images at 40× magnification.
Measurement of accuracy (sensitivity and specificity) of PHH3–H&E IHC staining
Using this method, we were able to assess the diagnostic performance and accuracy of PHH3 in detecting true mitotic figures. The relative ability of PHH3 to distinguish mitotic figures from other cells in the cell cycle was determined by constructing the receiver operating characteristic (ROC) curve. ROC curves demonstrate the coordinate variation in sensitivity (shown on the Y-axis) and specificity (shown on the X-axis) of a test as the threshold for defining test positivity varies over the entire range of possible test outcomes. Sensitivity was calculated as TP/(TP + FN) and specificity as TN/(TN + FP), based on the following classification: brown-stained nuclei with loss of the nuclear membrane or the presence of chromosome condensation arranged along a plane or separated were considered true-positive (TP) mitotic figures; unstained or missed mitotic figures showing the above criteria were considered false-negative (FN) mitotic figures; and intact brown-stained nuclei, or nuclei with a smooth membrane and absence of chromosome condensation, were considered false-positive (FP) mitotic figures, or PHH3-stained G2 phase cells; Figure .
Reassessment of the mitotic score and histological grade based on the mitotic activity index (MAI) versus PHH3
The number of mitotic figures stained by PHH3–H&E was compared with those stained with H&E only, both counted in each slide within the same 3 mm² areas of highest mitotic activity. The counted mitotic number was converted to a score according to the Nottingham grading system, as follows: mitosis score 1 for ≤11 mitoses per 3 mm²; mitosis score 2 for 12–22 mitoses per 3 mm²; and mitosis score 3 for ≥23 mitoses per 3 mm². These newly scored PHH3-stained mitotic figures were compared with the mitosis score assessed by the MAI of H&E slides.
Statistical analysis
All statistical analyses were performed using SPSS v. 26 (IBM, Armonk, NY, USA). The correlations between categorical variables were analysed by the Chi-square test.
The total number of PHH3-stained mitotic figures was dichotomised based on BCSS using X-tile bioinformatics software version 3.6.1 (School of Medicine, Yale University, New Haven, CT, USA) into high (≥20 mitoses/3 mm²) and low (<20 mitoses/3 mm²). Differences between the two independent groups were compared by the Mann–Whitney U-test. The degree of interobserver agreement was assessed using the intraclass correlation coefficient (ICC) for continuous data. The Kappa statistic was used to assess the concordance between observers for categorical variables. Outcome analysis was assessed using Kaplan–Meier curves and the log-rank test. The Cox regression model was used for the univariate and multivariate analysis. For all tests, P < 0.05 (two-tailed) was considered statistically significant.
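The score conversion and the agreement, group-comparison and outcome statistics described above can also be reproduced with open-source tools. The sketch below is a minimal illustration under stated assumptions: it uses a hypothetical per-case table with made-up column names, and standard scipy/scikit-learn/lifelines calls rather than the SPSS and X-tile software actually used in this study; the ICC is omitted for brevity.

```python
# Illustrative re-creation of the main analyses with open-source libraries.
# Column names and the CSV file are hypothetical; this is not the study's SPSS/X-tile workflow.
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score
from lifelines import KaplanMeierFitter, CoxPHFitter


def nottingham_mitosis_score(mitoses_per_3mm2: float) -> int:
    """Nottingham mitosis score from a count per 3 mm² (cutoffs as stated in the Methods)."""
    if mitoses_per_3mm2 <= 11:
        return 1
    if mitoses_per_3mm2 <= 22:
        return 2
    return 3


df = pd.read_csv("phh3_cohort.csv")  # one row per case (hypothetical export)
df["mitosis_score"] = df["mitotic_count_3mm2"].apply(nottingham_mitosis_score)

# Group comparison: high (>=20) vs. low (<20) PHH3-stained mitotic figures per 3 mm²
high = df.loc[df["phh3_count_3mm2"] >= 20, "mitotic_count_3mm2"]
low = df.loc[df["phh3_count_3mm2"] < 20, "mitotic_count_3mm2"]
u_stat, p_value = mannwhitneyu(high, low, alternative="two-sided")

# Interobserver concordance for categorical scores (two pathologists)
kappa = cohen_kappa_score(df["score_pathologist_1"], df["score_pathologist_2"])

# Outcome analysis: Kaplan-Meier estimate and a Cox model for BC-specific survival
km = KaplanMeierFitter()
km.fit(df["bcss_months"], event_observed=df["bc_death"])

cox = CoxPHFitter()
cox.fit(df[["bcss_months", "bc_death", "phh3_high", "grade", "nodal_stage"]],
        duration_col="bcss_months", event_col="bc_death")

print(p_value, kappa)
print(cox.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```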
Performance of PHH3–H&E mitotic counts in comparison with H&E-stained mitotic figures
When using the H&E stain only, the number of mitoses was significantly underestimated compared with those identified using the PHH3–H&E staining technique; Figure . Pathologists detected significantly more mitotic figures using PHH3–H&E (median ± SD, 20 ± 33) compared with the H&E method (median ± SD, 16 ± 25), P < 0.001.
Interobserver variability in detecting mitosis on H&E and with PHH3–H&E
High agreement between pathologists was observed when using PHH3–H&E (ICC = 0.820) in comparison with standard H&E (ICC = 0.514). The concordance between pathologists in identifying mitotic figures was highest when using the dual PHH3–H&E technique and lowest when using H&E-stained slides only. For both pathologists, the time taken to score mitotic figures stained with H&E only was significantly longer than the scoring time for those stained with PHH3–H&E (median ± SD, 240 ± 108 sec/3 mm² for H&E only and 120 ± 70 sec/3 mm² for PHH3–H&E; P < 0.001); Figure .
Interobserver concordance on hotspot identification
PHH3-labelled mitotic figures were easily seen and permitted quick identification of hotspots; the stain highlighted mitotic figures at low power with ease and without strain. Agreement between pathologists when using PHH3–H&E (k = 0.842) was better in comparison with H&E (k = 0.605).
Image analysis-assisted PHH3 indices
Counting of H&E- as well as PHH3-stained mitotic cells was performed using ImageJ and QuPath software and compared with an experienced pathologist's eye using digitalised WSIs. For H&E-stained mitotic figures, fair agreement was observed between QuPath and ImageJ (ICC = 0.431), between ImageJ and the pathologist's eye (ICC = 0.337), and between the pathologist and QuPath (ICC = 0.405). For PHH3-stained mitotic figures, good agreement was observed between QuPath and ImageJ (ICC = 0.692), between ImageJ and the pathologist's eye (ICC = 0.706), and between the pathologist and QuPath (ICC = 0.824); Figure . Regarding the distinction between PHH3-stained mitotic cells and G2 cells, good agreement was observed between QuPath and ImageJ (ICC = 0.643), between ImageJ and the pathologist's eye (ICC = 0.791), and between the pathologist and QuPath (ICC = 0.834) in detecting PHH3-stained G2 cells only.
Sensitivity and specificity of PHH3–H&E immunohistochemistry staining in counting mitotic figures
Assessment of the diagnostic performance of PHH3 using diagnostic testing metrics such as sensitivity, specificity, and the area under the ROC curve (AUC) revealed an AUC of 0.84, suggesting that PHH3 can be used as an accurate test for detecting mitotic figures; Figure .
Reassessment of the mitotic score and histological grade based on the MAI versus PHH3
Using PHH3–H&E, 9 cases of grade 1 were upgraded to grade 2 and 15 cases of grade 2 were upgraded to grade 3 (a total of 24 upgraded cases). None of the cases were downgraded.
Associations between PHH3 expression and clinicopathological parameters of BC
The associations between the PHH3 expression level and clinicopathological features of the tumours are summarised in Table . PHH3 positivity was significantly associated with aggressive characteristics, including higher tumour stage ( P = 0.01), tumour size ≥2 cm, high grade, nuclear pleomorphism, few tubule formations, and poor NPI ( P < 0.001).
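For illustration of how the diagnostic-performance figures above can be derived from per-figure annotations, the short sketch below computes an ROC curve and AUC from binary ground-truth labels and a continuous confidence score (for example, a DAB-intensity or classifier score). The data and variable names are synthetic and are not the study's measurements.

```python
# Minimal ROC/AUC sketch with synthetic labels and scores (not study data).
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]          # 1 = true mitotic figure, 0 = mimic/G2 cell
y_score = [0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.5, 0.95, 0.3]  # continuous confidence

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # tpr = sensitivity, fpr = 1 - specificity
print("AUC:", roc_auc_score(y_true, y_score))
```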
Correlation of PHH3 with MAI and Ki67
A strong positive significant correlation was found between mitotic count per 3 mm² and PHH3 score ( r = 0.738, P < 0.001), while a weak positive correlation was observed for the Ki67 score ( r = 0.269, P = 0.01); Table . A weak positive correlation was found between PHH3 and Ki67 ( r = 0.177, P = 0.016).
Outcome analysis
Univariate survival analysis revealed that patients with a high number of PHH3-stained mitotic figures, with a cutoff of >20 PHH3-stained mitotic figures per 3 mm², had significantly shorter BCSS and DMFS (hazard ratio [HR] 9.42, 95% confidence interval [CI] 3.97–22.35; P < 0.001 and HR 8.53, 95% CI 3.81–19.09; P < 0.001, respectively); Figure . In the nonchemotherapy-treated cohort, a high number of PHH3-stained mitotic cells was predictive of a higher risk of death from BC ( P < 0.001) and occurrence of distant metastasis ( P < 0.001). However, such an association was not observed in patients who received chemotherapy. Similarly, in the nonhormonal therapy-treated cohort, high PHH3 was predictive of a higher risk of death from BC ( P < 0.001) and occurrence of distant metastasis ( P < 0.001). However, such an association was not observed in patients who received hormonal therapy; Figure . In the multivariate Cox regression model including other prognostic covariates (tumour grade and nodal stage), PHH3 was an independent predictor of shorter BCSS (HR 2.568, 95% CI 1.05–6.28; P = 0.039) and worse DMFS (HR 2.87, 95% CI 1.24–6.67; P = 0.014). When PHH3 was added to mitosis and the Ki67 score, it was an independent predictor of BCSS (HR 5.94, 95% CI 2.37–14.89; P < 0.001) and DMFS (HR 4.63, 95% CI 2.13–10.04; P < 0.001), and when mitosis was replaced with PHH3 in a Cox regression model with other grade components, PHH3 was an independent predictor of survival (HR 5.66, 95% CI 1.92–16.69; P = 0.002), and even showed a more significant association with BCSS than mitosis (HR 3.63, 95% CI 1.49–8.86; P = 0.005) and Ki67 ( P = 0.27) in our sample; Table .
In the UK, it is estimated that over 13 million histopathological cases are examined annually, averaging 65,000 slides per day; the majority of these cases require scoring of mitosis as part of the assessment of proliferative capacity and for prognostic classification. Nearly 55,920 cases of BC are diagnosed each year, and accurate assessment of mitotic activity in these cases is essential for tumour grading and for predicting the risk of disease progression. BCs are graded based on mitotic count into scores 1, 2, and 3 on standard H&E-stained slides, which is a relatively subjective and time-consuming task. There are many approaches for assessing the proliferation (growth) potential of tumours, including assessing the overall proliferation index (average mitotic score), and mitotic evaluation in randomly selected areas or in the most mitotically active areas of the tumour (hotspots). In a previous study carried out by our group, we found that there is a tendency to underestimate mitotic count in randomly selected areas or the whole tumour slide compared to hotspots. The mitotic activity is used to reflect tumour cell division and growth potential. Therefore, it is important to identify the most mitotically active tumour areas, as these are the most likely to progress and to respond to cytotoxic chemotherapeutic agents. Other studies comparing methods for assessing mitotic counts or the proliferative activity of breast cancer showed that evaluation in the highly proliferative pool of the tumour (hotspots) is the most representative indicator of tumour behaviour and is strongly associated with outcome. In line with this, the current breast cancer guidelines recommend counting mitoses within the hotspots to define the proliferation score and grade of breast cancer. Accurate histologic grading is required for effective clinical staging and treatment decisions; however, distinguishing mitotic figures in H&E-stained slides from similar chromatin changes is a subjective process that is subject to intra- and interobserver variation. PHH3 has the benefit of being relatively mitosis-specific, detecting cells during their transition from the G2 to M phase. Our study addressed this subjectivity by assessing the interobserver reproducibility of mitotic counts among pathologists using this new technique of counterstaining PHH3 with H&E, and we found that agreement among pathologists in recognizing mitotic figures was highest when employing the dual PHH3–H&E staining approach. In accordance with other studies, we also found that the number of mitoses was dramatically undercounted when the H&E stain was used alone as opposed to the PHH3–H&E staining method. Moreover, PHH3 staining within a given tumour was heterogeneous and allowed easy identification of mitotic hotspots; lastly, it was significantly less time-consuming than counting mitoses on conventional H&E preparations, sparing precious diagnostic time and efficiently increasing the number of cases diagnosed daily, while improving the quality of diagnosis. The added value of using PHH3–H&E immunostaining is that it allows pathologists to assess the morphological features of mitoses at the same time as the tumour's histological features, increasing the specificity of quantification.
We also consider that staining with H&E along with PHH3, or other diagnostic antibodies, can preserve important diagnostic areas that could otherwise be lost during sequential sectioning, sparing valuable tissue biopsies, as serial sectioning may cut through the area of interest and result in the loss of regions necessary for critical diagnosis. This is particularly an issue with smaller core needle biopsies, which are of limited size and number. Furthermore, removal of the H&E stain does not always leave the target epitopes intact for potential reuse of the slide for selected biomarkers in existing protocols. For this reason, an innovative method utilizing IHC–H&E on the same slide without destaining can spare the tissue without sequential cutting. Another advantage of using dual-stained slides is that the rapidly expanding use of WSIs and artificial intelligence allows more objective measurements, including DIA, for more accurate and objective grade reporting, and coloured indices such as DAB are much easier to identify and quantify than subjective morphological criteria. Thus, mitotic counting based on PHH3 staining appears to be a robust, easy, and reliable method and could potentially decrease interobserver variability, especially with less experienced pathologists. We also demonstrated that using ImageJ analysis-assisted techniques was comparable to the human eye in terms of the detection of mitotic figures, that the agreement even improved when these mitotic figures were labelled with PHH3, and that the distinction between PHH3-labelled mitotic figures and G2 phase-stained cells is possible with good agreement. Using this technique, we were able to test the accuracy of mitosis detection by PHH3, which showed high accuracy as reflected by the sensitivity, specificity, and ROC curve. A few mitotic cells were missed, which may be due to IHC-related technical issues with tissue fixation and antigen retrieval. We examined the clinical outcome of the patients and, based on our findings, PHH3 has the capability to contribute further to BC grading and classification, and could be especially beneficial for pathologists and for training machine-learning algorithms. The mitotic count per 3 mm² showed a significant positive association with the PHH3 score, whereas the Ki67 score showed only a mild positive correlation. Although Ki67 is a widely used and well-known proliferation marker in BC, it is not specific for mitosis but is expressed in all phases of the cell cycle. Many cells that are not committed to cell division (not in the mitosis phase of the cell cycle) will be positive for Ki67. In contrast, PHH3 specifically identifies cells undergoing mitosis; therefore, it would provide a better representation of proliferation activity in BC and can be used in the clinical setting to identify mitoses. PHH3 was an independent predictor of survival when it was added to grade and nodal stage, and it even showed a more significant association with survival than the mitosis score and Ki67 in the multivariate analysis; therefore, the PHH3 score could be more predictive of outcome than mitosis and Ki67. This agrees with other studies, where it has been proposed as a replacement for the Ki67 index in several cancers.
The PHH3 score showed a stronger association with patient outcome, with a higher hazard ratio, than the mitosis score, which supports the hypothesis that PHH3 could replace the mitosis score in grading and could improve the prediction of BC behaviour and the grading scheme. A challenge that might face the implementation of PHH3 staining in routine practice is the cost burden on the pathology service, especially in places where healthcare is not extensively subsidised. It would be a trade-off between precision and expense. Healthcare providers in general, and pathologists specifically, should provide the best possible service to patients whenever possible, and they are responsible for the decisions and diagnoses made. Another point to mention is that PHH3 staining has the same cost as other routinely assessed IHC markers in BC, such as ER and HER2, can provide prognostic value at a lower cost than existing multigene assays, and could refine BC grading when using WSIs, which are associated with a lower ability to detect mitoses and have been shown to be more time-consuming for mitosis counting than conventional microscopes. A selective approach could be a solution, whereby targeted patients benefit most from PHH3 staining and assessment, especially poorly fixed specimens or borderline cases between mitosis scores 1 and 2 or 2 and 3, where such scores may affect the overall BC grade and hence patient management. In these instances, it would alleviate cost concerns. Moreover, utilizing PHH3 to refine mitosis counting requires readjusting the range and the cutoffs used to define mitosis scores in BC, as the number of mitotic figures detected using PHH3 has been shown to be higher than that detected using H&E. This refinement would need multicentre validation on a large number of cases and with long follow-up data.
Histopathological diagnoses of tumours depend mainly on H&E and IHC staining. These are the gold standards in clinical care. We are developing a new technique that combines both and can be tissue- and timesaving, while improving the diagnostic quality. It provides a more sensitive approach for training artificial intelligence IHC prediction models while using the exact same slide. Our results demonstrated a tendency to undergrade BCs based on H&E compared with PHH3, which alters the stage, risk of disease progression, and treatment recommendations. We, therefore, show for the first time the potential of using PHH3 counterstained with H&E for precise routine mitotic scoring in clinical practice.
Statement of Ethics
This work was approved by the Nottingham Research Ethics Committee 2 under the title Development of molecular genetic classification of breast cancer, and obtained ethics approval by the Northwest—Greater Manchester Central Research Ethics Committee under the title Nottingham Health Science Biobank, reference number 15/NW/0685.
AI stained and scored all the cases, took the lead in writing the article, data analysis and interpretation, SM helped with double scoring, ER: conceived and planned the study, contributed to data interpretation, made critical revisions, and approved the final version. All authors contributed to writing the article and approved the final version.
The authors declare no conflicts of interest.
|
Advances in Clinical Cardiology 2022: A Summary of Key Clinical Trials
|
492e37a9-a73a-4605-8bdd-9d6d9efb3644
|
10100625
|
Internal Medicine[mh]
|
In 2022, multiple clinical trials with the potential to influence current practice and future guidelines were presented at major international meetings including the American College of Cardiology (ACC), European Association for Percutaneous Cardiovascular Interventions (EuroPCR), European Society of Cardiology (ESC), Transcatheter Cardiovascular Therapeutics (TCT), American Heart Association (AHA), European Heart Rhythm Association (EHRA), Society for Cardiovascular Angiography and Interventions (SCAI), TVT-The Heart Summit (TVT) and Cardiovascular Research Technologies (CRT). In this article, we review key studies across the spectrum of cardiovascular subspecialties including acute coronary syndromes (ACS), interventional and structural, electrophysiology and atrial fibrillation, heart failure and preventative cardiology.
The results of clinical trials presented at major international cardiology meetings in 2022 were reviewed. In addition to this, a literature search of PubMed, Medline, Cochrane library and Embase was completed, including the terms "acute coronary syndrome", "atrial fibrillation", "coronary prevention", "electrophysiology", "heart failure" and "interventional cardiology". Trials were selected based on their relevance to the cardiology community and the potential to change future clinical guidelines or guide further phase 3 research. This article is based on previously completed work and does not involve any new studies of human or animal subjects performed by any of the authors.
Advances in Percutaneous Coronary Intervention
Several practice-changing trials in Percutaneous Coronary Intervention (PCI) have been published this year (Table ). Historically, PCI has been used to treat ischaemic cardiomyopathy, despite limited supporting evidence . In the REVascularisation for Ischaemic VEntricular Dysfunction (REVIVED-BCIS2) trial , 700 patients with left ventricular ejection fraction (LVEF) ≤ 35% and extensive coronary artery disease (CAD), as defined by the British Cardiovascular Intervention Society (BCIS) jeopardy score, were randomised to PCI or optimal medical therapy (OMT). Over a median follow-up time of 3.4 years, PCI versus OMT alone did not result in a reduction in the primary composite outcome of death or hospitalization for heart failure [37.2% vs. 38.0%; HR 0.99; 95% confidence interval (CI), 0.78–1.27; P = 0.96] . The optimal treatment for left main (LM) and multivessel CAD remains hotly debated. New observational data from the Swedish Coronary Angiography and Angioplasty Registry (SCAAR) compared outcomes among 10,254 such patients undergoing PCI (52.6%) versus coronary artery bypass grafting (CABG) (47.4%). PCI was associated with a 59% increased risk of death versus CABG after 7 years of follow-up ( P = 0.011). Despite the limitations of observational data, the findings are in keeping with the NOBLE study , supporting use of CABG where clinically appropriate in LM patients with additional multivessel CAD. In contrast, a meta-analysis of 2913 patients from four RCTs (SYNTAXES, PRECOMBAT, LE MANS, and MASS II) undergoing PCI versus CABG for LM or multivessel CAD did not report any significant difference in 10-year survival (RR 1.05; 95% CI 0.86–1.28), nor a significant difference in the subgroup with LM disease alone or multivessel disease alone. This may reflect a lower extent of non-LM disease complexity in the four trials. Of note, a new analysis from the SYNergy Between PCI With TAXUS and Cardiac Surgery Extended Study (SYNTAXES) evaluated mortality according to the presence or absence of bifurcation lesions . In the PCI group, those undergoing stenting of ≥ 1 bifurcation lesion, versus no bifurcation stenting, had a higher risk of death at 10 years (30.1% vs. 19.8%; P < 0.001). Furthermore, a 2- versus 1-stent bifurcation strategy was associated with a higher risk of death at 10 years (HR 1.51; 95% CI 1.06–2.14). Conversely, in the CABG group, the presence or absence of bifurcation lesions had no impact on mortality. As this was a post hoc analysis, the results can only be considered hypothesis-generating, but they are in keeping with previous data highlighting the complexity of bifurcations and the preference for a simple rather than a complex strategy where possible. Female sex has been associated with worse outcomes following PCI, related to smaller vessel disease.
However, previous data in LM disease have been unclear and, given that the LM has a larger diameter, results might be expected to be more equivalent. A substudy of the NOBLE trial showed no difference in outcomes for males versus females, with both showing an excess of major adverse cardiovascular and cerebrovascular events (MACCE) with PCI at 5 years, although no difference in all-cause mortality. For those undergoing PCI for LM disease, the IDEAL-LM (Individualizing Dual Antiplatelet Therapy After Percutaneous Coronary Intervention in patients with left main stem disease) study reported that a strategy of short 4-month DAPT (dual-antiplatelet therapy) plus a biodegradable polymer platinum-chromium everolimus-eluting stent was non-inferior to a strategy of conventional 12-month DAPT plus a durable polymer cobalt-chromium everolimus-eluting stent (DP-CoCr-EES), with respect to a composite of death, MI or target vessel revascularisation at 2 years. However, the shorter DAPT strategy did not show any reduction in bleeding events. The Complete Revascularization with Multivessel PCI for Myocardial Infarction (COMPLETE) trial previously reported that complete versus culprit-only PCI had a lower risk of cardiovascular (CV) death/myocardial infarction (MI) over 3 years of follow-up. In a new pre-specified analysis , complete versus culprit-only PCI was associated with a higher rate of freedom from residual angina (87.5% vs. 84.3%; P = 0.013) and improved quality of life, as assessed via the 19-item Seattle Angina Questionnaire, including reduced physical limitation. Improving PCI outcomes in patients with diabetes remains a focus of several trials. The Second-generation drUg-elutinG Stents in diAbetes: a Randomized Trial (SUGAR trial), which randomised 1175 patients with diabetes and CAD to an amphilimus-eluting stent (Cre8 EVO) vs. a conventional Resolute Onyx stent, previously reported that the Cre8 stent met non-inferiority and was associated with a possible 35% reduction in Target Lesion Failure (TLF) at 12 months . However, by 2 years , the difference in TLF was no longer significant (10.4% vs. 12.1%; HR 0.84; 95% CI 0.60–1.19), with numerical but non-significant differences in the individual components of cardiac death (3.1% vs. 3.4%), target vessel MI (6.6% vs. 7.6%), and target lesion revascularization (4.3% vs. 4.6%). While these 2-year results were disappointing, we await results of further studies of new stents in this clinical setting, including the ABILITY trial (NCT04236609) comparing an Abluminus DES + sirolimus-eluting stent system versus Xience. Quantitative flow ratio (QFR), an angiography-based approach to estimate the fractional flow reserve, previously showed superiority versus conventional angiography guidance at 1 year in the FAVOR III (Comparison of Quantitative Flow Ratio Guided and Angiography-Guided Percutaneous InterVention in Patients With cORonary Artery Disease) trial . New data report that the benefit with the QFR-guided strategy was sustained at 2 years, associated with a 34% reduction in the composite of death, MI or ischaemia-driven revascularization [8.5% vs. 12.5%; HR 0.66 (95% CI 0.54–0.81)] . The degree of outcome improvement was greatest amongst those patients in whom the pre-planned PCI strategy was modified by QFR. Current ESC guidelines give post-PCI surveillance stress testing a Class IIb recommendation.
The POST-PCI (Routine Functional Testing or Standard Care in High-Risk Patients after PCI) trial randomised 1706 patients at 1 year after PCI to routine functional testing (nuclear stress testing, exercise electrocardiography, or stress echocardiography) versus standard care . Use of routine functional testing failed to show any reduction in the primary outcome of death, MI, or hospitalization for unstable angina at 2 years (5.5% vs. 6.0%; HR, 0.90; 95% CI 0.61–1.35; P = 0.62), supporting standard care in these patients. Procedural time in graft-angiography studies may be much longer than in non-graft cases. The Randomised Controlled Trial to Assess Whether Computed Tomography Cardiac Angiography Can Improve Invasive Coronary Angiography in Bypass Surgery Patients (BYPASS CTCA) randomised 688 prior CABG patients to CTCA prior to coronary angiography versus standard care. Those who underwent prior CTCA had a shorter procedure duration (mean 17.4 vs. 39.5 min; OR − 22.12; 95% CI − 24.68 to − 19.56), less contrast during the invasive angiogram (mean 77.4 vs. 173 ml), less contrast-induced nephropathy (3.2% vs. 27.9%; P < 0.0001) and 40% greater patient satisfaction . BYPASS CTCA thus supports consideration of prior CTCA, particularly with more complex or uncertain graft locations or in patients at greater renal risk. The 2018 ESC guidelines recommend radial access for PCI unless there are overriding procedural considerations. A new patient-level meta-analysis of 7 trials, incorporating 21,700 patients, reported that, at 30 days, transradial versus transfemoral access was associated with a 23% reduction in all-cause mortality (1.6% vs. 2.1%; P = 0.012) and a 45% reduction in major bleeding (1.5% vs. 2.7%; P < 0.001) . However, transradial access is not without complications, the commonest of which is radial artery occlusion. In the RIVARAD (Prevention of Radial Artery Occlusion With Rivaroxaban After Transradial Coronary Procedures) trial, 538 patients were randomised following coronary angiography to rivaroxaban 10 mg once daily for 7 days versus standard care (no rivaroxaban) . At 30 days, use of rivaroxaban was associated with a 50% reduction in radial artery occlusion as defined by ultrasound (6.9% vs. 13.0%; OR 0.50; 95% CI 0.27–0.91). Bleeding Academic Research Consortium (BARC)-defined bleeding events were numerically but not significantly higher in the rivaroxaban group (2.7% vs. 1.9%; OR 1.4; 95% CI 0.4–4.5). To assess whether distal radial artery puncture might reduce occlusion rates, the Distal Versus Conventional Radial Access (DISCO-RADIAL) trial randomised 1,307 patients to distal versus conventional radial access . Distal access was associated with a shorter median hemostasis time (153 vs. 180; P < 0.001), but radial artery spasm was more common (5.4% vs. 2.7%; P = 0.015), crossover rates were higher (7.4% vs. 3.5%; P = 0.002) and no difference in the primary endpoint of occlusion on vascular ultrasound was noted at discharge (0.31% vs. 0.91%; P = 0.29). While radial access is now considered preferable, transfemoral access is still required in certain cases. As transfemoral operator skills may potentially decline through reduction in volume or lack of experience, ultrasound-guided access techniques are increasingly being used. The UNIVERSAL (Routine Ultrasound Guidance for Vascular Access for Cardiac Procedures) trial randomised 621 patients to femoral access with ultrasound guidance and fluoroscopy versus fluoroscopy alone .
Interestingly, and in contrast with previous trials, ultrasound guidance was not associated with a significant reduction in the composite of BARC 2, 3, and 5 bleeding or major vascular complication at 30 days (12.9% vs. 16.1%; p = 0.25). The strategy of multi-arterial CABG is endorsed by surgical guidelines but takes longer, is more technically demanding and can be associated with increased complications, such as deep sternal wound infections. An observational single-centre study by Momin et al. of 2979 patients undergoing isolated CABG (from 1999 to 2020) reported that those receiving total arterial revascularization had the longest mean survival (18.7 years), versus single internal mammary artery (SIMA) plus vein grafts (16.1 years; P < 0.00001) and vein grafts only (10.4 years; P < 0.00001). Interestingly, survival with total arterial revascularization was not significantly different to SIMA plus radial artery ± vein grafting (18.60 years). This study supports the durability of arterial grafting, although conclusions are limited by its non-randomised design. Conversely, Saadat et al. stratified 241,548 patients from the Society of Thoracic Surgeons (STS) database undergoing isolated CABG in 2017 into 3 groups: single arterial (86%), bilateral internal thoracic artery multi-arterial (BITA-MABG; 5.6%), and radial artery multi-arterial (RA-MABG; 8.5%). After risk adjustment, the observed to expected event (O/E) ratios showed no significant difference in mortality between the three strategies (1.00 vs. 0.98 vs. 0.96) and the risk of deep sternal wound infection was highest in the BITA-MABG group (1.91 vs. 0.90 vs. 0.96). Given the ongoing data uncertainty, results from the prospective randomised ROMA trial are eagerly awaited (NCT03217006).
Structural: Aortic Valve Interventions
There has been a dramatic expansion in transcatheter aortic valve interventions over the past decade . A recent analysis of US registry data conducted by Sharma et al. reported a near doubling in transcatheter aortic valve replacement (TAVR) volume overall between 2015 and 2021 (44.9% in 2015 vs. 88% in 2021, P < 0.01), including a 2.7-fold increase in those < 65 years (now similar to surgical aortic valve replacement [SAVR]: 47.5% TAVR vs. 52.5% SAVR, P = ns), particularly in younger patients with heart failure (HF) (OR 3.84; 95% CI 3.56–4.13; P < 0.0001) or prior CABG (OR 3.49; 95% CI 2.98–4.08; P < 0.001) . These numbers may further increase across all risk categories, with early long-term data from the seminal PARTNER (Placement of AoRTic TraNscathetER Valve Trial) trials awaited. Emerging evidence from trials such as AVATAR (Aortic Valve Replacement Versus Conservative Treatment in Asymptomatic Severe Aortic Stenosis) and RECOVERY (Early Surgery Versus Conventional Treatment in Very Severe Aortic Stenosis) suggests that early intervention for severe aortic stenosis (AS), before patients develop symptoms, may be of benefit . In a pooled analysis of key trials (PARTNER 2A, 2B & 3) involving 1974 patients (mean age 81 years; 45% women), Généreux et al. evaluated the relationship between cardiac damage at baseline and prognosis in patients with severe symptomatic AS who underwent AVR (40% SAVR, 60% TAVI) . Baseline cardiac damage was defined using a 0–4 scoring system (0 = no damage and 4 = biventricular failure).
Baseline damage correlated strongly with 2-year mortality (HR 1.51 per higher stage; 95% CI 1.32–1.72), with each increase in stage conferring a 24% increase in mortality (P = 0.001) (from stage 0 = 2.5% to stage 4 = 28.2%), suggesting a role for earlier intervention. Several ongoing trials, such as EARLY TAVR (Evaluation of TAVR Compared to Surveillance for Patients With Asymptomatic Severe Aortic Stenosis), TAVR UNLOAD (Transcatheter Aortic Valve Replacement to UNload the Left Ventricle in Patients With ADvanced Heart Failure) and PROGRESS (Management of Moderate Aortic Stenosis by Clinical Surveillance or TAVR), aim to answer these questions directly.

Valve-in-valve (VIV) TAVR is being increasingly utilised in patients with failed AVR; however, it remains unclear whether these patients do better with or without balloon valve fracture (BVF). In a registry analysis of 2975 patients undergoing VIV-TAVR (with balloon-expandable SAPIEN 3 or SAPIEN 3 Ultra) between December 2020 and March 2022, Garcia et al. reported that BVF versus no BVF led to a larger mean valve area (1.6 vs. 1.4 cm2; P < 0.01) and lower mean valve gradient (18.2 vs. 22.0 mm Hg; P < 0.01) but also to higher rates of death or life-threatening bleeding (OR 2.55; 95% CI 1.44–4.50) and vascular complications (OR 2.06; 95% CI 0.95–4.44). However, sub-analysis suggested the increase in mortality was mainly seen if BVF was undertaken before VIV-TAVR (OR 2.90; 95% CI 1.21–6.94), whereas no difference was noted if undertaken after VIV-TAVR. This suggests that VIV-BVF should only be performed once the operator has a new TAVR in place. While designed primarily for AS, conventional TAVR devices have sometimes been utilised for the treatment of severe aortic regurgitation (AR). The novel Trilogy heart valve system, specifically developed for AR, was evaluated in 45 patients (mean age 77 years, 40% female, mean EuroSCORE 7.1%) with moderate to severe AR by Tamm et al. The primary endpoint, a reduction of ≥ 1 AR grade, was met in 100% of cases. There were no episodes of stroke, death, or conversion to open surgery, but 9 patients (23%) required permanent pacing.

Subclinical leaflet thrombosis (SLT) is a relatively common complication of TAVR; however, the optimal treatment strategies, whether with anticoagulation or antiplatelets, remain contested. The multicentre ADAPT TAVR (Edoxaban vs. DAPT in reducing subclinical leaflet thrombosis and Cerebral Thromboembolism After TAVR) trial randomised 229 patients (mean age 80.1 years; 41.9% men) undergoing TAVR for symptomatic severe AS, and without other indication for oral anticoagulation (OAC), to edoxaban 60 mg or 30 mg once daily versus DAPT with aspirin and clopidogrel. At 6 months, edoxaban, by intention-to-treat analysis, was associated with a trend to reduced SLT as assessed by cardiac CT (9.8% vs. 18.4%; P = 0.076) and, in contrast to prior trials with DOAC post-TAVR, there was no difference in bleeding rates (11.7% vs. 12.7%; P = ns). Interestingly, a secondary per-protocol analysis focusing on patients with high compliance did reach statistical significance (9.1% vs. 19.1%; risk ratio 0.48; 95% CI 0.23–0.99). However, despite the use of serial brain MRI, there was no difference in the presence/number of cerebral lesions and no difference in neurocognitive outcomes including stroke at 6 months. Giustino et al.
reported a new secondary analysis from the GALILEO trial (Rivaroxaban-based Antithrombotic Strategy to an Antiplatelet-based Strategy After TAVR to Optimize Clinical Outcomes) which, as described previously, had randomised 1644 patients post-TAVR without an indication for OAC to rivaroxaban 10 mg plus aspirin versus DAPT with aspirin plus clopidogrel for 90 days, but was stopped early due to higher rates of thromboembolic events, bleeding and mortality in the rivaroxaban group. In the new analysis, thromboembolic events appeared to be associated with a higher risk of mortality (HR 8.41; 95% CI 5.10–13.87) versus BARC 3 bleeding (HR 4.34; 95% CI 2.31–8.15). Furthermore, this mortality risk appeared higher than that conferred by known risk factors such as age (adjusted HR 1.04; 95% CI 1.01–1.08) and chronic obstructive pulmonary disease (COPD) (adjusted HR 2.11; 95% CI 1.30–3.41). These findings, along with previous data from ATLANTIS (AntiThrombotic Strategy After Trans-Aortic Valve Implantation for Aortic Stenosis) and ENVISAGE-TAVI AF (Edoxaban Compared to Standard Care After Heart Valve Replacement Using a Catheter in Patients With Atrial Fibrillation), show how the role of DOACs post-TAVI remains uncertain. However, given the devastating impact of thromboembolic events in this patient group, ongoing research is warranted. The absence of a bleeding signal with DOAC in ADAPT TAVR, in which most received lower-dose edoxaban, suggests that lower-dose DOAC for a short duration while the valve is endothelialising may improve the risk/benefit ratio.

Another area of current contention is the use of cerebral embolic protection (CEP) to reduce risk of stroke. While current guidance does not mandate use, some operators use it in high-risk cases. Kaur et al. conducted a meta-analysis of 1,016 patients (mean age 81.3 years) from several randomised trials (DEFLECT III, MISTRAL-C, CLEAN-TAVI, SENTINEL, and REFLECT I and II) evaluating the TriGuard (Keystone Heart) and Sentinel devices versus standard care. At 30 days, CEP was not associated with a reduction in the primary outcome of all-cause stroke (RR 0.93; 95% CI 0.57–1.53), nor a reduction in mortality. Subsequently, the PROTECTED TAVR (Stroke PROTECTion With SEntinel During Transcatheter Aortic Valve Replacement) trial randomised 300 patients (mean age 72 years, 40% female) to CEP with a Sentinel device versus standard care. Again, no significant difference in the primary outcome of stroke at 72 h was noted (2.4% vs. 2.9%, P = 0.30), although numbers were relatively small. BHF PROTECT TAVI (British Heart Foundation Randomised Clinical Trial of Cerebral Embolic Protection in Transcatheter Aortic Valve Implantation) plans to enrol 7000 patients and its findings are eagerly awaited.

Structural: Mitral and Tricuspid Valve Interventions

The favourable findings in COAPT (Cardiovascular Outcomes Assessment of the MitraClip Percutaneous Therapy for Heart Failure Patients With Functional Mitral Regurgitation [MR]) helped lead to device approval. However, it has been suggested the reason COAPT was favourable was the strict eligibility criteria, mandating LVEF ≥ 20% to ≤ 50%, left ventricular end-systolic dimension (LVESD) ≤ 70 mm and failure of aggressive medical therapy.
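The registry analyses described next stratify patients by whether they would have met these COAPT criteria. Purely as an illustration, a minimal sketch of how such an eligibility flag could be derived is shown below; the field names and the medical-therapy flag are hypothetical, and this is not the EXPAND investigators' actual algorithm.

```python
# Illustrative only: flag registry patients as "COAPT-like" using the criteria
# quoted above (LVEF 20-50%, LVESD <= 70 mm, significant MR persisting despite
# aggressive medical therapy). Field names and the therapy flag are hypothetical.
from dataclasses import dataclass


@dataclass
class RegistryPatient:
    lvef_percent: float           # left ventricular ejection fraction (%)
    lvesd_mm: float               # LV end-systolic dimension (mm)
    failed_aggressive_gdmt: bool  # significant MR despite optimised therapy


def is_coapt_like(p: RegistryPatient) -> bool:
    """Return True if the patient meets the COAPT-style criteria quoted above."""
    return (
        20.0 <= p.lvef_percent <= 50.0
        and p.lvesd_mm <= 70.0
        and p.failed_aggressive_gdmt
    )


# Example: LVEF 35%, LVESD 62 mm, persistent MR on optimised therapy -> True
print(is_coapt_like(RegistryPatient(35.0, 62.0, True)))
# Example: LVEF 18%, LVESD 72 mm -> False
print(is_coapt_like(RegistryPatient(18.0, 72.0, True)))
```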
EXPAND (A Contemporary, Prospective Study Evaluating Real-world Experience of Performance and Safety for the Next Generation of MitraClip Devices) was a prospective multicentre registry in which 1,041 patients with site-reported MR 3+/4+ were enrolled and received the MitraClip. A recent analysis compared 125 "COAPT-like" patients meeting COAPT inclusion criteria versus 128 "non-COAPT" patients. At 1 year, COAPT-like patients did not show any difference in the primary outcome of all-cause mortality (22.6% vs. 19.6%, P = 0.37) or heart failure hospitalisation (32.6% vs. 25%, P = 0.08). In keeping with their lower baseline MR severity, more non-COAPT patients achieved reduction in MR to mild or less (≤ 1+) (97.2% vs. 86.5%), suggesting that MitraClip may benefit patients beyond the strict COAPT criteria, but prospective randomised data are needed, such as the ongoing EVOLVE-MR (MitraClip for the Treatment of Moderate Functional Mitral Regurgitation).

Previous data from CLASP (Edwards PASCAL TrAnScatheter Mitral Valve RePair System Study) and CLASP II have validated the safety and efficacy of the Edwards PASCAL™ transcatheter valve repair system. CLASP IID randomised 180 patients with severe degenerative symptomatic MR not eligible for surgery (mean age 81 years, 67% male, median STS 5.9%) to transcatheter edge-to-edge repair (TEER) with the PASCAL device (Edwards Lifesciences) versus the MitraClip (Abbott) device. At 30 days, the PASCAL device met criteria for non-inferiority with respect to the composite endpoint of CV death, stroke, MI, renal replacement therapy, severe bleeding and re-intervention (3.4% vs. 4.8%; P for noninferiority < 0.05). Of interest, the proportion of patients with MR ≤ 1+ was durable in the PASCAL group (87.2% at discharge vs. 83.7% at 6 months; P = 0.317), whereas MitraClip outcomes showed some loss of efficacy (88.5% at discharge vs. 71.2% at 6 months; P = 0.003). Although only interim data, this hints that the PASCAL device may have superior durability.

Valve-in-valve transcatheter mitral valve replacement (ViV-TMVR) may be utilised in very high-risk patients without a surgical option on a case-by-case basis despite a paucity of real-world outcome data. Bresica et al. retrospectively compared outcomes of 48 patients with bioprosthetic mitral valve (MV) failure undergoing ViV-TMVR (mean age 65 years, 63% female, mean STS 7.9%) versus 36 patients undergoing re-do MV surgery (mean age 58 years, 72% female, mean STS 7.1%). ViV-TMVR was not associated with improvement in 1-year survival (90% vs. 80%, P = 0.33) and was associated with a higher average postprocedural gradient (8.9 vs. 5.7 mm Hg; P < 0.001). Thus, ViV-TMVR is a good option for high-risk patients, but in less comorbid patients it may not provide as good a long-term benefit as surgery, particularly in those with smaller original surgical valves. Data from the ongoing PARTNER 3 Mitral Valve-in-Valve trial will be useful to help guide decision-making in such patients.

Several seminal trials, such as TRILUMINATE (Abbott Transcatheter Clip Repair System in Patients With Moderate or Greater TR), TriBAND (Transcatheter Repair of Tricuspid Regurgitation With Edwards Cardioband TR System Post-Market Study) and TRISCEND (Investigation of Safety and Clinical Efficacy After Replacement of Tricuspid Valve With Transcatheter Device), have led to a much greater focus on transcatheter tricuspid interventions.
CLASP TR (Edwards PASCAL Transcatheter Valve Repair System Pivotal Clinical Trial), a prospective single-arm multicentre study, evaluated 1-year outcomes of the PASCAL transcatheter valve repair system in 65 patients (mean age 77 ± 9 years, 55% female, mean STS 7.7%) with severe tricuspid regurgitation (TR). In keeping with the high baseline comorbidity, the major adverse event rate was 16.9% (n = 11), with all-cause mortality 10.8% (n = 7) and 18.5% (n = 12) re-admitted with heart failure. Paired analysis demonstrated significant improvements in New York Heart Association (NYHA) grade (P < 0.001), Kansas City Cardiomyopathy Questionnaire (KCCQ) score (P < 0.001) and 6-min walk test (6MWT) (P = 0.014). Importantly, the reduction in TR severity noted at 30 days (P < 0.001) was maintained at 1 year (100% had ≥ 1 grade reduction and 75% had ≥ 2 grade reduction, P < 0.001). TRICLASP (Transcatheter Repair of Tricuspid Regurgitation With Edwards PASCAL Transcatheter Valve Repair System), a prospective, single-arm multicentre trial, evaluated 30-day outcomes in 67 of 74 patients (mean age 80 years, 58% female, mean STS 9%) undergoing the PASCAL Ace transcatheter repair system for severe symptomatic inoperable TR (Fig. ). The primary composite outcome of major adverse events occurred in 3%, with 88% achieving ≥ 1 grade reduction in TR versus baseline (P < 0.001), along with significant improvements in NYHA class, KCCQ score, and 6MWT (P < 0.001). Longer-term follow-up data are awaited. The TriClip-Bright (An Observational Real-world Study Evaluating Severe Tricuspid Regurgitation Patients Treated With the Abbott TriClip™ Device) study, a multicentre, prospective study, reported 30-day outcomes for 300 patients (mean age 78 ± 7.6 years) treated with the TriClip transcatheter valve repair system (Fig. ). The primary endpoint of procedural success (survival to discharge) was met in 91%. Significant improvements in both NYHA class and KCCQ score were noted (P < 0.001). The trial is still actively recruiting, with a planned follow-up duration of 1 year.

Structural: Catheter-Based Left Atrial Appendage and Patent Foramen Ovale Closure

While definitive studies to guide patent foramen ovale (PFO) closure practice are still lacking, a multidisciplinary consensus statement by SCAI was published this year recommending closure in patients aged 18–60 with a PFO-associated stroke, platypnoea-orthodeoxia syndrome with no other cause, and systemic embolism with no other cause. Of note, in the absence of PFO-associated stroke, the guidance does not recommend PFO closure for transient ischaemic attack, AF with ischaemic stroke, migraine, decompression illness or thrombophilia.

Several left atrial appendage closure (LAAC) devices have been approved in recent years, with favourable long-term data published last year for the Watchman LAAC device (Boston Scientific). The AMULET IDE (Amplatzer Amulet Left Atrial Appendage Occluder Versus Watchman Device for Stroke Prophylaxis) trial randomised patients with non-valvular atrial fibrillation (AF) not suitable for anticoagulation to LAAC with an Amulet device (n = 934) versus a Watchman device (n = 944). At 3 years, there was no difference in the primary composite endpoint of CV mortality, ischaemic stroke or systemic embolism (11.1% vs. 12.7%, P = 0.31), all-cause mortality (14.6% vs. 17.9%; P = 0.07) or major bleeding (16.1% vs. 14.7%; P = 0.46). Similarly, updated data from the US LAAC registry, comparing the Watchman FLX to its previous iteration, the Watchman 2.5, were published this year by Freeman et al.
who reported US LAAC registry outcomes from 54,206 patients (mean age 76 years; 59% men) undergoing LAAC with the new Watchman FLX (n = 27,103) versus the previous Watchman 2.5 (n = 27,103). In-hospital major adverse events were significantly lower with the new Watchman FLX (1.35% vs. 2.4%; OR 0.57; 95% CI 0.50–0.65), driven by reductions in pericardial effusion requiring intervention (0.42% vs. 1.23%), device embolization (0.02% vs. 0.06%) and major bleeding (1.08% vs. 2.05%). Longer follow-up will help clarify if technical aspects between devices confer long-term clinical outcome advantages. Despite the evolution of device technology for LAAC, key clinical questions, such as anticoagulation strategy, remain. Freeman et al. conducted a US LAAC registry analysis of 31,994 patients who underwent Watchman LAAC between 2016 and 2018. Only 12.2% of patients received the full anticoagulation protocol mandated by clinical trials (Fig. ). In contrast to previous European reports from EWOLUTION (Registry on WATCHMAN Outcomes in Real-Life Utilization), the 45-day adjusted adverse event rate was lower if discharged on warfarin alone (HR 0.692; 95% CI 0.569–0.841) or DOAC alone (HR 0.731; 95% CI 0.574–0.930) versus warfarin plus aspirin, suggesting that further research is needed to guide the optimal antithrombotic strategy post-LAAC.

Acute Coronary Syndromes

The ISCHAEMIA trial (Initial Invasive or Conservative Strategy for Stable Coronary Disease) previously reported that routine invasive therapy versus optimal medical therapy (OMT) in stable patients with moderate ischaemia did not reduce major adverse events (MAE), but the possibility of excess events over longer follow-up was queried. The ISCHAEMIA-EXTEND study (median follow-up 5.7 years) reported that, while there was still no difference in all-cause mortality with routine invasive versus medical therapy (12.7% vs. 13.4%, P = 0.74), after 2 years the survival curves for cardiovascular (CV) death started to diverge and by 7 years CV mortality was significantly lower in the routine invasive group (6.4% vs. 8.6%; HR 0.78; 95% CI 0.63–0.96). Conversely, there was an increase in non-CV death in the routine invasive group (5.5% vs. 4.4%; HR 1.44; 95% CI 1.08–1.91). On balance, this still supports an initial OMT strategy but highlights the utility of understanding anatomy to risk stratify and perhaps identify those patients who will benefit the most from CV risk reduction (Fig. ). Ten-year follow-up data will prove informative.

New-onset, stable chest pain remains a substantial burden on healthcare systems. SCOT-HEART (Scottish COmputed Tomography of the HEART Trial) and PROMISE (PROspective Multicenter Imaging Study for Evaluation of Chest Pain) previously reported benefit of early computed tomography coronary angiography (CTCA) for the evaluation of stable chest pain. FFR-CT may further improve CT diagnosis. PRECISE (Prospective Randomized Trial of the Optimal Evaluation of Cardiac Symptoms and Revascularization) randomised 2103 patients (mean age 58 years, 50% women) with suspected CAD to a risk scoring algorithm (with low-risk patients deferred and high-risk patients undergoing FFR-CT) versus standard care. At a median follow-up of 11.8 months, algorithm-guided use of FFR-CT resulted in markedly lower MACE (4.2% vs. 11.3%; adjusted HR 0.29; 95% CI 0.20–0.41), driven by a lower rate of catheterisation without obstructive CAD. There was no difference in all-cause death. A subsequent cost-effectiveness analysis is ongoing.
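To make the shape of that tiered testing pathway concrete, a schematic sketch is shown below; the score, threshold, and labels are placeholders chosen for illustration and are not the actual PRECISE risk algorithm.

```python
# Schematic illustration only: a risk-guided triage pathway of the kind evaluated
# in PRECISE (low-risk patients deferred, the remainder sent for CTCA with
# selective FFR-CT). The score, threshold and labels are placeholders, not the
# trial's actual algorithm.
def triage(pretest_risk_score: float, low_risk_threshold: float = 0.10) -> str:
    """Return the next step in this assumed care pathway for a given risk score."""
    if pretest_risk_score < low_risk_threshold:
        return "defer testing and optimise preventive therapy"
    return "CTCA with selective FFR-CT"


for score in (0.05, 0.40):
    print(f"pre-test risk {score:.2f} -> {triage(score)}")
```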
Despite current advances in ACS detection, prediction of recurrent events remains difficult. Batra et al. assessed the predictive value of biomarker modelling (with hs-TnT, CRP, GDF-15, cystatin C and NT-proBNP) from 14,221 patients enrolled in the PLATO (A Comparison of Ticagrelor and Clopidogrel in Patients With Acute Coronary Syndrome) and TRACER (Trial to Assess the Effects of Vorapaxar (SCH 530,348; MK-5348) in Preventing Heart Attack and Stroke in Participants With Acute Coronary Syndrome) trials. An outcome model termed "ABC-ACS Ischaemia" predicted 1-year risk of CV death/MI with C-indices of 0.71 and 0.72 in the development and validation cohorts, respectively. While encouraging, such models likely need to be integrated with additional individual patient characteristics to improve risk prediction.

Optical coherence tomography (OCT) has demonstrable utility in assessing plaque morphology and so may be useful in delineating between different aetiologies of ACS. The Tokyo, Kanagawa, Chiba, Shizuoka, and Ibaraki active OCT applications for ACS (TACTICS) registry evaluated plaque morphology in 702 ACS patients undergoing OCT-guided PCI and reported that rupture was the commonest aetiology (59%), followed by plaque erosion (26%) and then calcification (4%) (Fig. ). However, at 12 months, calcified nodules conferred the worst outcome, with a 32.1% MACE rate compared to 12.4% and 6.2% amongst ruptures or erosions, respectively.

Antiplatelet Therapy

Strategies to shorten DAPT duration post-PCI in high bleeding risk patients continue to be evaluated. Longer-term follow-up at 15 months of the MASTER DAPT (Management of High Bleeding Risk Patients Post-Bioresorbable Polymer Coated Stent Implantation With an Abbreviated Versus Prolonged DAPT Regimen) trial confirmed the initial results, with the incidence of the composite endpoint (death, MI, stroke, major bleeding) remaining non-inferior for shortened DAPT versus standard care (HR 0.92, 95% CI 0.76–1.12; P = 0.40), but a significantly lower rate of major bleeding in the short DAPT group (HR 0.68, 95% CI 0.56–0.83; P = 0.001). These data, although important, apply in the context of contemporary stent design such as the biodegradable-polymer sirolimus-eluting Ultimaster stent (Terumo) used in MASTER DAPT.

Effective reversal of antiplatelet agents could be helpful when active bleeding risk outweighs ischaemic risk, particularly in elderly patients. No formal antiplatelet reversal agents are currently licensed; however, an interesting drug under investigation is bentracimab, a recombinant IgG1 monoclonal antibody antigen-binding fragment that binds with high affinity to ticagrelor and its active metabolite. Bhatt et al., in a phase IIb trial, randomised 205 patients (mean age 61 years, 50% female) already treated with DAPT for 30 days to bentracimab (n = 154) versus placebo (n = 51). Use of bentracimab was associated with a significant reduction in the primary endpoint of percentage inhibition of P2Y12 reaction units at 4 h (P < 0.0001) without any excess of thrombotic events or deaths. Further larger-scale phase III trials are eagerly awaited.

In patients with an indication for antiplatelet monotherapy, previous studies have suggested a possible benefit for clopidogrel versus aspirin at least in certain patient subgroups. PANTHER (P2Y12 inhibitor vs.
aspirin monotherapy in patients with coronary artery disease) was a meta-analysis of several large, randomised trials totalling 24,325 patients with established coronary artery disease (mean age 64 years, 22% women) which compared P2Y12 inhibition (62% clopidogrel, 38% ticagrelor) versus aspirin. Use of P2Y12 inhibition was associated with a 12% reduction in the primary composite outcome of CV death, MI or stroke at 18 months (5.5% vs. 6.3%; HR 0.88; 95% CI 0.79–0.97), driven by a lower risk of MI (HR 0.77; 95% CI 0.66–0.90), but with no difference in stroke (HR 0.85; 95% CI 0.70–1.02) or bleeding (6.4% vs. 7.2%; HR 0.89; 95% CI 0.81–0.98). While firm conclusions are difficult due to the inclusion of 2 different P2Y12 inhibitors, this suggests a P2Y12 inhibitor may be warranted instead of aspirin for long-term secondary prevention in patients with coronary artery disease.

Indobufen is a reversible COX inhibitor with similar anti-thrombotic effects to aspirin but fewer gastrointestinal side effects and a potentially lower risk of bleeding. The OPTION (the Efficacy and Safety of Indobufen and Low-dose Aspirin in Different Regimens of Antiplatelet Therapy) trial randomised 4,551 patients (mean age 61 years; 65% male) without an acute troponin rise undergoing PCI with DES to 1 year of DAPT with either indobufen 100 mg BD plus clopidogrel 75 mg OD (n = 2258) or aspirin 100 mg OD plus clopidogrel 75 mg OD (n = 2293). At 1 year, use of indobufen versus aspirin met non-inferiority with respect to the primary composite outcome (CV death, MI, stroke, ISR and BARC type 2, 3 or 5 bleeding) (4.47% vs. 6.11%; HR 0.73; 95% CI 0.56–0.94; P < 0.001 for noninferiority). The secondary safety endpoint of BARC 2, 3 or 5 bleeding was lower with indobufen (2.97% vs. 4.71%; HR 0.63; 95% CI 0.46–0.85), driven by a reduction in BARC 2 bleeding (1.68% vs. 3.49%; P < 0.001). These intriguing data suggest a potential new treatment option, particularly for patients with gastrointestinal bleeding or aspirin allergy.

Full-dose anticoagulation plus antiplatelet therapy significantly increases bleeding risk, but the role of low-dose anticoagulation for vascular prevention continues to be studied. Asundexian is a novel oral activated factor XI inhibitor which may lower thromboembolic events with a lower bleeding risk. In the phase II PACIFIC-AMI trial (Study to Gather Information About the Proper Dosing and Safety of the Oral FXIa Inhibitor BAY 2,433,334 in Patients Following an Acute Heart Attack), 1601 patients (median age 68 years, 23% women) with recent acute MI were randomised to asundexian (10 mg, 20 mg or 50 mg) versus placebo in addition to standard DAPT. At 4 weeks, asundexian was not associated with a significant increase in the pre-specified safety outcome of BARC 2 bleeding versus placebo (0.98; 90% CI 0.71–1.35), although there was a numerical increase in bleeding with higher asundexian doses. Based on this trial, asundexian 50 mg daily is being considered for a phase III cardiovascular outcomes trial in acute MI. Asundexian was also evaluated in the phase IIb PACIFIC-STROKE trial (Study to Gather Information About the Proper Dosing and Safety of the Oral FXIa Inhibitor BAY 2,433,334 in Patients Following an Acute Stroke), which randomised 1808 patients with non-embolic ischaemic stroke to asundexian (10 mg, 20 mg or 50 mg) versus placebo in addition to standard care including antiplatelet therapy.
Asundexian (whether by pooled or individual dose analysis) was not associated with a reduction in the primary efficacy outcome of ischaemic stroke or covert brain infarction at 6 months, although the primary safety outcome of major bleeding was not significantly different [asundexian pooled vs. placebo HR 1.57 (90% CI 0.91–2.71)]. It thus remains unclear if asundexian has a useful role in ischaemic stroke.

In current PPCI guidelines, bivalirudin (Class IIa) was replaced by unfractionated heparin (UFH) (Class I) as previous studies reported equipoise in clinical outcomes but more difficult drug administration with bivalirudin. BRIGHT-4 (Bivalirudin With Prolonged Full Dose Infusion Versus Heparin Alone During Emergency PCI) randomised 6,016 PPCI patients from 63 Chinese centres in open-label fashion to bivalirudin bolus plus infusion for a median of 3 h versus UFH bolus. Patients underwent predominantly radial PPCI (93%) without any prior thrombolytic, anticoagulant or glycoprotein inhibitor treatment. At 30 days, bivalirudin was associated with a 31% reduction in the primary outcome of all-cause death or BARC 3–5 bleeding (HR 0.69; 95% CI 0.53–0.91, P = 0.007), reduced BARC 3–5 bleeding (HR 0.21; 95% CI 0.08–0.54), reduced all-cause mortality (3.0% vs. 3.6%, P = 0.04), and reduced stent thrombosis (0.4% vs. 1.1%, P = 0.0015). Despite these favourable data, given the inherent difficulties in bivalirudin delivery and its moderate increase in cost versus UFH, it is unclear if the BRIGHT-4 findings will change practice, although a stronger guideline recommendation would be expected.

Tongxinluo (TXL) is a traditional Chinese medicine, approved in China for the treatment of stroke and angina. CTS-AMI (China Tongxinluo Study for Myocardial Protection in Patients With Acute Myocardial Infarction) randomised 3755 patients with STEMI undergoing PPCI at 124 Chinese centres to TXL versus placebo (in addition to standard therapy). Use of TXL was associated with a 36% reduction in the primary composite outcome of CV death, revascularisation, MI and stroke at 30 days (3.39% vs. 5.25%; RR 0.64; 95% CI 0.47–0.88) and a 30% reduction in cardiac death (2.97% vs. 4.24%; RR 0.70; 95% CI 0.50–0.99). While the findings are dramatic, further work is necessary to understand the mechanism of action of this novel drug, and further randomised multicentre trials are needed to confirm efficacy.

Electrophysiology and Devices

Following on from the HIS-Alternative trial (His Pacing Versus Biventricular Pacing in Symptomatic HF With Left Bundle Branch Block), which reported similar outcomes with His-bundle CRT (His-CRT) versus conventional biventricular CRT (BiV-CRT), the LBBP-RESYNC (Left Bundle Branch Versus Biventricular Pacing For Cardiac Resynchronization Therapy) trial randomised 40 patients with non-ischaemic cardiomyopathy, LBBB and an indication for resynchronisation to left bundle branch CRT (LBB-CRT) versus standard BiV-CRT pacing. LBB-CRT was associated with a larger improvement in LVEF at 6 months (21.1% vs. 15.6%; P = 0.039, 95% CI 0.3–10.9), a greater reduction in LV end-systolic volumes and a greater reduction in NT-proBNP (Fig. ). Vijayaraman et al. presented a retrospective analysis of 477 patients comparing those who underwent conduction system pacing (LBB pacing or His-bundle) versus conventional BiV-CRT. Conduction system pacing was associated with a lower incidence of the primary composite of death or heart failure hospitalisation (28.3% vs. 38.4%; P = 0.013), mainly driven by a reduction in HF hospitalisations.
Vijayaraman et al. also presented a retrospective analysis of 212 patients undergoing rescue LBB pacing who met indications for CRT but had coronary venous lead failure or were non-responders to BiV-CRT. LBB pacing (successful in 94%) was associated with improvement in LVEF from 29% at baseline to 40% at follow-up (P < 0.001) (Fig. ). The MELOS (Multicentre European Left Bundle Branch Area Pacing Outcomes Study) registry evaluated 2533 patients from 14 European centres undergoing transseptal left bundle branch area pacing (LBBAP), 27.5% for heart failure and 72.5% for bradycardia. LB fascicular capture was most common (69.5%), followed by LV septal capture (21.5%) and proximal LBB capture (9%). The overall complication rate was 11.7%, including ventricular trans-septal complications in 8.3%. Overall, these trials collectively support the efficacy and safety of conduction system pacing as a suitable alternative to conventional BiV-CRT, although larger randomised trials are required to formally test superiority.

Infections related to cardiac implanted electronic devices (CIEDs) have high mortality and morbidity, and the European Heart Rhythm Association (EHRA) consensus advises prompt extraction. Pokorney et al. analysed a Medicare database of 11,619 patients admitted with a CIED infection, of whom only 2,109 (28.2%) had device extraction within 30 days. Device extraction versus no extraction was associated with a reduction in 1-year mortality (HR 0.79, 95% CI 0.70–0.81), and early device extraction within 6 days versus no extraction was associated with a 41% reduction in 1-year mortality (P < 0.001).

Subcutaneous ICDs (S-ICDs) have been evaluated in previous trials including PRAETORIAN and UNTOUCHED as an alternative to transvenous systems for patients at risk of lead complications or infections. The ATLAS-ICD (Avoid Transvenous Leads in Appropriate Subjects) trial randomised 593 patients with an indication for ICD to S-ICD versus transvenous ICD (TV-ICD) implantation. S-ICD was associated with a 92% reduction in perioperative lead complications at 6 months (0.4% vs. 4.8%; OR 0.08; 95% CI 0.00–0.55), although the composite safety outcome (including the primary outcome plus device-related infection requiring surgical revision, significant wound hematoma requiring evacuation or interruption of oral anticoagulation, MI, stroke/TIA, or death) was similar (4.4% vs. 5.6%; OR 0.78, 95% CI 0.35–1.75) and inappropriate shocks were non-significantly more common (2.7% vs. 1.7%; HR 2.37, 95% CI 0.98–5.77).

In heart failure patients, there is contradictory evidence whether defibrillator capability improves prognosis in patients receiving CRT. RESET-CRT (Re-evaluation of Optimal Re-synchronization Therapy in Patients with Chronic Heart Failure) retrospectively compared outcomes in 847 CRT-P versus 2722 CRT-D patients undergoing CRT (of whom 27% had a non-ischaemic aetiology; exclusion criteria included recent ACS, revascularisation, or any indication for a secondary prevention ICD). The primary endpoint of all-cause mortality at 2.35 years of follow-up (adjusted for age and entropy balance) was non-inferior for CRT-P versus CRT-D (HR 0.99, 95% CI 0.81–1.20), suggesting no mortality benefit with defibrillator capability in this population. Aktas et al. compared propensity-matched outcomes of 535 patients with an ICD versus 535 patients without an ICD from the empagliflozin arm of the EMPEROR-Reduced trial.
Those with an ICD versus no ICD had non-significantly lower mortality (HR 0.74, 95% CI 0.51–1.07, P = 0.114) and sudden cardiac death (HR 0.59, 95% CI 0.31–1.15, P = 0.122). However, despite propensity matching, the results were confounded by differences in medical therapy between groups, with more ICD patients receiving beta-blockers and ARNIs but fewer receiving ACE-I/ARBs and MRAs.

Ventricular Arrhythmias and SCD

The VANISH (Ventricular Tachycardia Ablation versus Escalation of Antiarrhythmic Drugs) trial previously demonstrated the superiority of catheter ablation versus escalated AAD therapy with regard to mortality, VT storm and appropriate ICD shocks in patients with previous MI and VT. A new sub-analysis compared shock-treated VT events and appropriate shock burden between the 2 groups. Catheter ablation was associated with a significant reduction in shock-treated VT events (39.07 vs. 64.60 per 100 person-years; HR 0.60; 95% CI 0.38–0.95) and total shock burden (48.35 vs. 78.23; HR 0.61; 95% CI 0.37–0.96).

Prediction of the risk of sudden cardiac death (SCD) after MI has typically been guided by LVEF < 35%, but many patients with LVEF < 35% who receive an ICD never require it, whereas some with higher LVEF are still at risk of SCD. The additional predictive value of CMRI, in particular core scar size and grey zone size, for the PROFID risk prediction model was investigated in 2,049 patients imaged > 40 days post-MI. In the subgroup without an ICD, use of CMRI data versus no CMRI data significantly improved prediction of SCD [area under the curve (AUC) of model 0.753 vs. AUC 0.618]. In the subgroup with an ICD, addition of CMRI data did not significantly improve prediction of SCD (AUC 0.598 vs. 0.535). This suggests CMRI may be useful to risk stratify post-MI and guide ICD use, but further prospective studies are required.

The SMART-MI-ICM trial previously reported that, in post-MI patients with EF 35–50%, implantable cardiac monitor (ICM) use versus control was associated with higher rates of arrhythmia detection, although the clinical significance was unclear. The BIOGUARD-MI (BIO monitorinG in Patients With Preserved Left ventricUlar Function AfteR Diagnosed Myocardial Infarction) trial aimed to assess the clinical value of arrhythmia detection on ICM by randomising 804 patients with NSTEMI/STEMI to ICM versus standard care. Use of ICM was not associated with an overall significant reduction in the primary composite endpoint of CV death or hospitalisation at 2.5 years (HR 0.84, P = 0.21, 95% CI 0.64–1.10), although a reduction was noted in the NSTEMI subgroup (HR 0.69, 95% CI 0.49–0.98). This subgroup observation can only be hypothesis-generating but is plausible given the more complex and co-morbid nature of a NSTEMI population.

Atrial Fibrillation

While smartwatches may improve detection of atrial fibrillation (AF), including asymptomatic AF, previous studies have reported high false positive rates. The mAF-App II trial, which used Huawei smartwatch photoplethysmography, reported data from 2.8 million people in China who downloaded the app. During 4 years of follow-up, 12,244 (0.4%) people received a query AF notification, 5,227 attended for clinical evaluation with ECG and 24-h Holter monitoring and, within this group, AF was confirmed in 93.8%.
This suggests much better specificity than previous studies, although the notification rate was lower than in some studies, reflecting the relatively young population, and clinical data were not available for the 7017 people who received a notification but did not attend for evaluation. Unlike previous Apple, Fitbit and Huawei studies, E-Brave used the Preventicus smartphone app and invited 67,488 policyholders of a German health insurance scheme to participate, of whom 5,551 met inclusion criteria and agreed to enrol (AF naïve, median age 65 years; 31% female; median CHA2DS2-VASc of 3) and were randomised to active AF screening (photoplethysmogram [PPG] for 1 min twice per day for 2 weeks then twice weekly for 6 months, plus a 2-week loop recorder if abnormal PPG) versus standard care. At 6 months, those in the active arm had double the rate of AF detection requiring OAC treatment (1.33% vs. 0.63%; OR 2.12; 95% CI 1.19–3.76). After 6 months, those without a new AF diagnosis were invited to cross over to the opposite study arm, and, after a further 6 months, active screening with the app again doubled the detection and treatment of AF (1.38% vs. 0.51%; OR 2.75; 95% CI 1.42–5.34). Given the widespread availability of smartphones, particularly in higher-risk populations, this may be a useful public health intervention, although further prospective studies are required to evaluate clinical outcomes of treating AF detected in this fashion.

AF has been widely associated with an increased risk of dementia, and better control of AF may reduce this risk. Zeitler et al., using the Optum Clinformatics database, evaluated the propensity-matched risk of dementia in 19,088 patients following catheter ablation versus 19,088 patients treated with antiarrhythmic drugs (AAD) for AF. Catheter ablation was associated with a 41% reduction in risk of dementia (HR 0.59; 95% CI 0.51–0.68; P < 0.0001) and a 49% reduction in the secondary endpoint of mortality (HR 0.51, 95% CI 0.46–0.55, P < 0.001), supporting the value of effective AF treatment in this population.

The Augustus trial previously reported the benefit of apixaban instead of a vitamin-K antagonist (VKA) and ongoing P2Y12i monotherapy rather than DAPT for patients with AF and ACS/PCI. Harskamp et al. undertook a new analysis of 4,386 patients from Augustus to assess if benefits varied depending on baseline HAS-BLED (≤ 2 vs. ≥ 3) and CHA2DS2-VASc (≤ 2 vs. ≥ 3) scores. Apixaban was associated with lower bleeding versus VKA irrespective of baseline risk [HR 0.57 (HAS-BLED ≤ 2); HR 0.72 (HAS-BLED ≥ 3); interaction P = 0.23] and a lower risk of death or hospitalization [HR 0.92 (CHA2DS2-VASc ≤ 2); HR 0.82 (CHA2DS2-VASc ≥ 3); interaction P = 0.53]. Aspirin versus placebo increased bleeding irrespective of baseline risk [HR 1.86 (HAS-BLED ≤ 2); HR 1.81 (HAS-BLED ≥ 3); interaction P = 0.88] with no significant difference in death or hospitalization [HR 1.09 (CHA2DS2-VASc ≤ 2); HR 1.07 (CHA2DS2-VASc ≥ 3); interaction P = 0.90].

The INVICTUS (Investigation of Rheumatic AF Treatment Using Vitamin K Antagonists, Rivaroxaban or Aspirin Studies) trial randomised 4565 patients with rheumatic mitral valve disease and AF at high risk (CHA2DS2-VASc ≥ 2, mitral valve area ≤ 2 cm2, left atrial spontaneous contrast or thrombus) to rivaroxaban versus VKA. Rivaroxaban was associated with an increased incidence of the primary composite endpoint of stroke, systemic embolus, MI, or death from vascular/unknown cause (560 vs.
446 events; HR 1.25, 95% CI 1.10–1.41) despite suboptimal VKA control (only 33.2% having an appropriate INR at enrolment, and the time in therapeutic range (TTR) being only 56–65% during follow-up). Rivaroxaban was also associated with a 37% increased risk of stroke and a 23% increased risk of death. Thus, for AF and rheumatic mitral valve disease, VKA remains preferable to rivaroxaban.

Previous studies reported that high-power, short-duration (HPSD) versus conventional radiofrequency ablation (RFA) for AF was more effective with similar safety. The POWER FAST III (High Radiofrequency Power for Faster and Safer Pulmonary Vein Ablation) trial randomised 267 patients with AF to HPSD versus conventional RFA. HPSD was associated with a reduced ablation time but no difference in the primary efficacy outcome of freedom from atrial arrhythmia (99.2% vs. 98.4% in right pulmonary veins, 100% vs. 100% in left pulmonary veins) or the primary safety outcome of oesophageal lesions at endoscopy (7.5% vs. 6.5%; P = 0.94).

Both conventional RFA and cryoablation for pulmonary vein isolation induce injury to neurocardiac structures (nerves and ganglia), which may be detected by a release of S100b and a post-procedure rise in heart rate. The technique of pulsed field ablation (PFA) may reduce neurocardiac trauma. Lemoine et al. randomised 56 patients to PFA versus cryoablation for AF. In those treated with PFA versus cryoablation, troponin I levels were 3 times higher (P < 0.01), indicating more myocardial injury, but S100b levels were 2.9 times lower (P < 0.001), and there was no increase in post-procedural heart rate (vs. a marked increase with cryoablation; P < 0.01), indicating less neurocardiac damage with PFA. In addition, the procedural success and durability of PFA appear encouraging. Keffer et al. evaluated 41 patients undergoing pulmonary vein PFA. The primary outcome of AF > 30 s or atrial tachycardia after a 30-day blanking period, detected on 7-day Holter monitoring at 3 and 6 months, occurred in 5 patients, of whom 3 underwent redo ablation during which all pulmonary veins were found to be still isolated.

EAST-AFNET 4 previously reported a benefit of early rhythm control versus standard care in patients with AF, but there has been a paucity of data regarding initial ablation in such patients. In PROGRESSIVE-AF (a 3-year follow-up of the EARLY-AF trial), 303 patients with newly diagnosed symptomatic paroxysmal AF were randomised to upfront ablation versus AAD. Ablation was associated with a 75% reduction in the primary outcome of progression to persistent AF/flutter/tachycardia requiring cardioversion (1.9% vs. 7.4%; HR 0.25; 95% CI 0.09–0.70), a 49% reduction in any atrial arrhythmia > 30 s (56.5% vs. 77.2%; HR 0.51; 95% CI 0.38–0.67), a 69% reduction in hospitalisations (5.2% vs. 16.8%; RR 0.31; 95% CI 0.14–0.66) and a 53% reduction in adverse effects (11% vs. 23.5%; RR 0.47; 95% CI 0.28–0.79).

Use of botulinum toxin A to reduce AF was assessed in the NOVA (NeurOtoxin for the PreVention of Post-Operative Atrial Fibrillation) study, which randomised 323 patients undergoing cardiac (bypass and/or valve) surgery to epicardial botulinum toxin A (125 units or 250 units) versus placebo.
Overall, botulinum toxin 125 units or 250 units versus placebo was not associated with a reduction in the primary outcome of AF > 30 s at 30 days (RR 0.80; 95% CI 0.58–1.10 and RR 1.04; 95% CI 0.79–1.37, respectively), although in the patient subgroup > 65 years, botulinum toxin 125 units was associated with AF reduction (RR 0.64; 95% CI 0.43–0.94), which may be considered hypothesis-generating and warrants further study.

Etripamil is a novel non-dihydropyridine calcium channel blocker, which may be given as a nasal spray, for acute treatment of patients with paroxysmal supraventricular tachycardia (PSVT) or AF. The RAPID (Efficacy and Safety of Etripamil for the Termination of Spontaneous PSVT) study screened 706 patients with PSVT, ultimately randomising 135 patients to etripamil versus 120 to placebo. Etripamil was associated with more than double the rate of the primary outcome of conversion to sinus rhythm within 30 min (64.3% vs. 31.2%; HR 2.62; 95% CI 1.66–4.15) and a median time to conversion of 17 min (almost 3 times quicker than placebo).

Heart Failure

Previous studies have shown that the selective cardiac myosin activator omecamtiv mecarbil may improve CV outcomes in HFrEF patients. To assess functional impact, the METEORIC-HF (Effect of Omecamtiv Mecarbil on Exercise Capacity in Chronic Heart Failure With Reduced Ejection Fraction) trial randomised 276 patients with LVEF ≤ 35% and NYHA II-III (in 2:1 fashion) to omecamtiv mecarbil versus placebo for 20 weeks, in addition to standard therapy. Surprisingly, despite good tolerability and the previous favourable CV outcome data, omecamtiv mecarbil was not found to improve exercise capacity (assessed by peak oxygen uptake on cardiopulmonary exercise stress testing).

A major stumbling block in optimising HF medications can be hyperkalaemia. Patiromer, a non-absorbed sodium-free potassium-binding polymer, increases faecal potassium excretion. The DIAMOND (Patiromer for the Management of Hyperkalemia in Subjects Receiving RAASi for HFrEF) trial randomised 1642 patients with HFrEF and renin–angiotensin–aldosterone system inhibitor (RAASi)-related hyperkalaemia to patiromer versus placebo. Over a period of 13–42 (mean 27) weeks, patiromer was associated with less increase in potassium (adjusted mean change + 0.03 vs. + 0.13 mmol/l; 95% CI –0.13 to 0.07; P < 0.001). The risk of hyperkalaemia and the need for reduction of MRA dose were numerically (although not statistically) lower. These important findings support patiromer being incorporated in local HF protocols.

Implementation of HF guidelines can be hampered by many factors. PROMPT-HF (PRagmatic trial of Messaging to Providers about Treatment of Heart Failure) randomised 1310 patients with HFrEF, not already taking all four pillars of therapy, to a strategy of targeted, tailored electronic healthcare record alerts to optimise guideline-directed medical therapy (GDMT) versus standard care. The electronic alert strategy was associated with a significant increase in the number of drug classes prescribed at 30 days (26% vs. 19%; adjusted RR 1.41; 95% CI 1.03–1.93; P = 0.03; number needed to alert = 14). In an impressive attempt to improve secondary prevention therapy delivery, the SECURE (Secondary Prevention of Cardiovascular Disease in the Elderly Trial) trial randomised 2499 patients with MI ≤ 6 months to an open-label polypill, comprising aspirin 100 mg, ramipril (2.5, 5 or 10 mg) and atorvastatin (20 or 40 mg), versus standard care.
At 3-year follow-up, use of the polypill was associated with a 24% reduction in the primary endpoint of CV death, type 1 MI or ischaemic stroke (9.5% vs. 12.7%; HR 0.76, 95% CI 0.6–0.96; P = 0.02).

Sodium-glucose cotransporter-2 inhibitor (SGLT2i) trials continue to dominate HF research. A meta-analysis of 13 SGLT2i trials involving 90,413 participants reported a 37% reduction in the risk of progressive renal dysfunction (RR 0.63, 95% CI 0.58–0.69) and a 23% reduction in the risk of CV death or HF hospitalisation (RR 0.77; 95% CI 0.74–0.81). Effects were similar in diabetics versus non-diabetics and regardless of baseline renal function (Fig. ). When first introduced, and before reno-protective properties became clear, SGLT2i use was restricted to patients with eGFR > 60 to optimise glycaemic control. EMPA-KIDNEY (Study of Heart and Kidney Protection With Empagliflozin) randomised 6609 patients with impaired renal function (eGFR 20 to < 45, or eGFR 45 to < 90 plus urinary albumin-to-creatinine ratio > 200) to empagliflozin versus placebo. At 2 years, empagliflozin was associated with a 28% reduction in the primary endpoint of progression of kidney disease (defined as end-stage kidney disease, eGFR < 10, decrease in eGFR ≥ 40% from baseline, or death from renal causes) or CV death (13.1% vs. 16.9%; HR 0.72; 95% CI 0.64–0.82; P < 0.001).

The EMPULSE (Empagliflozin in Patients Hospitalized for Acute Heart Failure) trial randomised 530 acutely decompensated patients hospitalised with HF, regardless of ejection fraction or diabetic status, to empagliflozin versus placebo. Those with IV vasodilators, IV inotropes, increasing IV diuretic doses, cardiogenic shock or recent ACS were excluded. Empagliflozin versus placebo was more frequently associated with clinical benefit in the primary composite endpoint of death, number of HF events, time to first HF event, and change in Kansas City Cardiomyopathy Questionnaire-Total Symptom Score at 90 days (stratified win ratio 1.36; 95% CI 1.09–1.68; P = 0.0054) (Fig. ). The DELIVER (Dapagliflozin in Heart Failure with Mildly Reduced or Preserved Ejection Fraction) study randomised 6263 hospitalised or recently hospitalised patients with HF and LVEF > 40% to dapagliflozin versus placebo. Dapagliflozin was associated with an 18% reduction in the primary endpoint of death or worsening HF (16.4% vs. 19.5%; HR 0.82, 95% CI 0.73–0.92; P < 0.001).

Acetazolamide, a carbonic anhydrase inhibitor, may improve the efficiency of loop diuretics through reduction of proximal tubular sodium reabsorption, potentially leading to faster decongestion in patients with acute decompensated heart failure. The ADVOR (Acetazolamide in Decompensated Heart Failure with Volume Overload) study randomised 519 patients with decompensated HF to IV acetazolamide (500 mg daily) versus placebo, in addition to IV loop diuretics (at twice the oral maintenance dose). Acetazolamide was associated with a 46% improvement in attaining the primary endpoint of absence of signs of fluid overload at 3 days (42.2% vs. 30.5%; RR 1.46, 95% CI 1.17–1.82; P < 0.001), with higher urine output and natriuresis but without an excess of acute kidney injury, hypokalaemia, or hypotension.

While the importance of optimised dosing of HF treatment is well established, since HF therapies may be associated with hypotension and renal decline, the ideal rate of titration is less clear.
The STRONG-HF (Safety, Tolerability and Efficacy of Rapid Optimization, Helped by NT-proBNP Testing, of Heart Failure Therapies) trial randomised 1078 patients admitted to hospital with acute HF to rapid up-titration (achieving full recommended doses within 2 weeks of discharge) versus usual care. Rapid up-titration was associated with a significantly lower rate of readmission for HF or all-cause death (15.2% vs. 23.3%; 95% CI 2.9–13.2; P = 0.0021), approximately a 10% increase in adverse events, but a similar rate of serious adverse events.

IV iron has a Class IIa recommendation for patients with HF and anaemia. Most trials have used ferric carboxymaltose. IRONMAN (Intravenous ferric derisomaltose in patients with heart failure and iron deficiency in the UK) randomised 1,137 patients with chronic HF and iron deficiency (LVEF < 45%, with transferrin saturation < 20% or ferritin < 100 µg/l) to ferric derisomaltose (which can be given as a rapid, high-dose infusion) versus usual care. At a median follow-up of 2.7 years, ferric derisomaltose showed a trend to reduction in the primary composite endpoint of HF hospitalisation and CV death (336 vs. 411 events; RR 0.82, 95% CI 0.66–1.02; P = 0.07) and a significant reduction in HF hospitalisations. Since study outcomes may have been confounded by the COVID-19 pandemic, a pre-specified analysis censoring follow-up on September 30, 2020 was undertaken, which reported a significant reduction in the primary endpoint (210 vs. 280 events; RR 0.76 [95% CI 0.58 to 1.00]; P = 0.047).

Myosin inhibition using mavacamten in patients with obstructive hypertrophic cardiomyopathy was examined in the VALOR-HCM (Mavacamten in Adults With Symptomatic Obstructive HCM Who Are Eligible for Septal Reduction Therapy) trial, which randomised 112 patients eligible for septal reduction therapy (SRT) to mavacamten (starting at 5 mg and titrating using LVEF and LVOT gradient) versus placebo. After 16 weeks of follow-up, mavacamten was associated with a marked reduction in obstructive parameters, with only 17.9% still meeting guideline criteria for SRT (vs. 76.8% of placebo patients; 95% CI 0.44–0.74; P < 0.001).

Prevention

Lipoprotein(a) [Lp(a)] is highly genetically determined and higher levels are associated with an increased risk of CV disease. Statins have minimal effect and PCSK9i only a modest effect, but olpasiran, a small interfering RNA (siRNA), may enable significant Lp(a) reduction. In the OCEAN(a)-DOSE TIMI 67 trial, 281 patients with elevated Lp(a) > 150 nmol/L were randomised to 1 of 4 olpasiran doses (10 mg, 75 mg, or 225 mg every 12 weeks, or 225 mg every 24 weeks) versus placebo. By 36 weeks, the 4 doses of olpasiran were associated with placebo-adjusted percent reductions in Lp(a) concentration of 70.5%, 97.4%, 101.1%, and 100.5%, respectively, along with useful reductions in low-density lipoprotein (LDL) cholesterol and apolipoprotein B. In addition to olpasiran, other Lp(a)-lowering agents are in development, including the siRNA SLN360 and pelacarsen, an antisense oligonucleotide targeting the Lp(a) gene transcript, which is being studied in the 8000-patient outcomes study Lp(a)HORIZON; this will hopefully clarify if reduction of Lp(a) is of benefit.

Perceived myalgia remains an important limitation for statin adherence. The Cholesterol Treatment Trialists' Collaboration evaluated the incidence of myalgia in a meta-analysis of 19 double-blind trials of statin versus placebo (n = 123,940) and four double-blind trials of more versus less intensive statin regimens (n = 30,724).
For the 19 placebo-controlled trials, statin use was associated with a 3% increase in reported muscle pain or weakness at a median 4.3 years of follow-up (27.1% vs. 26.6%; RR 1.03, 95% CI 1.01–1.06), but the excess was mainly during the first year, when statin use was associated with an absolute excess of 11 events per 1000 person-years. Similarly, a small increase in reported muscle pain or weakness was seen with higher versus lower intensity statin regimens (36.1% vs. 34.8%; RR 1.05, 95% CI 1.01–1.09). In summary, while statin therapy can cause myalgia, most (> 90%) reports of muscle symptoms by participants allocated statin therapy were not due to the statin.

The FOURIER-OLE (Fourier Open-label Extension Study in Subjects With Clinically Evident Cardiovascular Disease in Selected European Countries) evaluated the long-term follow-up of the FOURIER study in 6635 patients randomised to the PCSK9 inhibitor evolocumab versus placebo. At a median of 5 years, evolocumab was associated with a 20% reduction in CV death, MI or stroke (HR 0.8, 95% CI 0.68–0.93; P = 0.003) with a low risk of adverse events.

Elevated uric acid is recognised as an independent risk factor for CV events. The ALL-HEART (Allopurinol versus usual care in UK patients with ischaemic heart disease) study randomised 5721 patients > 60 years with ischaemic heart disease but no history of gout to allopurinol (up-titrated to a maximum of 600 mg) versus placebo. However, over a mean of 4.8 years of follow-up, allopurinol was not associated with a reduction in the primary endpoint of CV death, MI or stroke (11% vs. 11.3%; P = 0.65).

The endothelin pathway has been implicated in the pathogenesis of hypertension but is currently not targeted therapeutically, leaving this pathway unopposed with currently available drugs. The global PRECISION (Dual endothelin antagonist aprocitentan for resistant hypertension) trial randomised 730 patients with hypertension resistant to at least 3 antihypertensives to the dual endothelin receptor antagonist aprocitentan 12.5 mg or 25 mg versus placebo in a 1:1:1 fashion. At 4 weeks, aprocitentan met the primary endpoint with greater systolic blood pressure reduction (mean change − 15.3 mmHg for aprocitentan 12.5 mg and − 15.2 mmHg for aprocitentan 25 mg vs. − 11.5 mmHg for placebo; P < 0.005 for both treatment doses).

Delivering healthcare in rural environments can be challenging. In China, non-physician village doctors may initiate and titrate antihypertensive medications according to a standard protocol with supervision from primary care physicians, and undertake health coaching on home blood pressure monitoring, lifestyle changes, and medication adherence. The China Rural Hypertension Control Project randomised 33,995 patients from 326 villages to a village doctor-led multifaceted intervention versus usual care. By 36 months, mean systolic pressure in the intervention group had dropped from 157 to 126.1 mmHg, whereas the usual-care group only dropped from 155.4 mmHg to 146.7 mmHg, and there was a significant reduction in the primary composite CV endpoint (1.98% vs. 2.85% per year; HR 0.69, 95% CI 0.63–0.76) with 33% fewer strokes (P < 0.0001), 39% fewer cases of HF (P = 0.005), 24% fewer CV deaths (P = 0.0004), and 15% fewer all-cause deaths (P = 0.009).

Previous trial data suggested a protective effect of nocturnal dosing of anti-hypertensive therapies on cardiovascular events, although the trial methodology was subsequently questioned.
The TIME (Treatment in Morning versus Evening) trial randomised 21,104 patients (mean age 65 years, 43% female) to evening versus morning dosing of their regular antihypertensive agent. After 5 years, the primary outcome (a composite of vascular death, MI or stroke) occurred in 3.4% of the evening dosing group versus 3.7% of the morning group (P = 0.53). There was no difference in rates of stroke between groups (1.2% vs. 1.3%, P = 0.54); however, there was a modestly higher rate of falls in the morning dosing group (22.2% vs. 21.1%, P = 0.048). This informative trial demonstrates no difference in cardiovascular outcomes with respect to the timing of antihypertensive dosing, albeit with a slightly reduced risk of falls with evening dosing.
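As a reader's note on how the relative and absolute effect sizes quoted throughout this review relate to one another, the arithmetic below uses the China Rural Hypertension Control Project figures above as a worked example; treating the hazard ratio approximately as a risk ratio is an assumption made purely for illustration.

\[
\text{relative risk reduction} \approx (1 - \mathrm{HR}) \times 100\% = (1 - 0.69) \times 100\% = 31\%
\]
\[
\text{absolute risk reduction} = 2.85\% - 1.98\% = 0.87\% \text{ per year}, \qquad
\text{NNT} \approx \frac{1}{0.0087} \approx 115 \text{ patient-years per event prevented}
\]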
Several practice-changing trials in Percutaneous Coronary Intervention (PCI) have been published this year (Table ). Historically, PCI has been used to treat ischaemic cardiomyopathy, despite limited supporting evidence. In the REVascularisation for Ischaemic VEntricular Dysfunction (REVIVED-BCIS2) trial, 700 patients with left ventricular ejection fraction (LVEF) ≤ 35% and extensive coronary artery disease (CAD), as defined by the British Cardiovascular Intervention Society (BCIS) jeopardy score, were randomised to PCI or optimal medical therapy (OMT). Over a median follow-up time of 3.4 years, PCI versus OMT alone did not result in a reduction in the primary composite outcome of death or hospitalization for heart failure [37.2% vs. 38.0%; HR 0.99; 95% confidence interval (CI) 0.78–1.27; P = 0.96].

The optimal treatment for left main (LM) and multivessel CAD remains hotly debated. New observational data from the Swedish Coronary Angiography and Angioplasty Registry (SCAAR) compared outcomes among 10,254 such patients undergoing PCI (52.6%) versus coronary artery bypass grafting (CABG) (47.4%). PCI was associated with a 59% increased risk of death versus CABG after 7 years of follow-up (P = 0.011). Despite the limitations of observational data, the findings are in keeping with the NOBLE study, supporting use of CABG where clinically appropriate in LM patients with additional multivessel CAD. In contrast, a meta-analysis of 2913 patients from four RCTs (SYNTAXES, PRECOMBAT, LE MANS, and MASS II) undergoing PCI versus CABG for LM or multivessel CAD did not report any significant difference in 10-year survival (RR 1.05; 95% CI 0.86–1.28), nor a significant difference in the subgroups with LM disease alone or multivessel disease alone. This may reflect a lower extent of non-LM disease complexity in the four trials. Of note, a new analysis from the SYNergy Between PCI With TAXUS and Cardiac Surgery Extended Study (SYNTAXES) evaluated mortality according to the presence or absence of bifurcation lesions. In the PCI group, those undergoing stenting of ≥ 1 bifurcation lesion versus no bifurcation stenting had a higher risk of death at 10 years (30.1% vs. 19.8%; P < 0.001). Furthermore, a 2- versus 1-stent bifurcation strategy was associated with a higher risk of death at 10 years (HR 1.51; 95% CI 1.06–2.14). Conversely, in the CABG group, the presence or absence of bifurcation lesions had no impact on mortality. As this was a post hoc analysis, the results can only be considered hypothesis-generating, but they are in keeping with previous data highlighting the complexity of bifurcations and the preference for a simple rather than a complex strategy where possible.

Female sex has been associated with worse outcomes following PCI, related to smaller vessel disease. However, previous data in LM disease have been less clear and, given that the LM has a larger diameter, outcomes might be expected to be more equivalent. A substudy of the NOBLE trial showed no difference in outcomes for males versus females, with both showing an excess of major adverse cardiovascular and cerebrovascular events (MACCE) with PCI at 5 years, although no difference in all-cause mortality.
For those undergoing PCI for LM disease, the IDEAL-LM (Individualizing Dual Antiplatelet Therapy After Percutaneous Coronary Intervention in patients with left main stem disease) study reported that a strategy of short 4-month DAPT (dual antiplatelet therapy) plus a biodegradable-polymer platinum-chromium everolimus-eluting stent was non-inferior to a strategy of conventional 12-month DAPT plus a durable-polymer cobalt-chromium everolimus-eluting stent (DP-CoCr-EES), with respect to a composite of death, MI or target vessel revascularisation at 2 years. However, the shorter DAPT strategy did not show any reduction in bleeding events. The Complete Revascularization with Multivessel PCI for Myocardial Infarction (COMPLETE) trial previously reported that complete versus culprit-only PCI had a lower risk of cardiovascular (CV) death/myocardial infarction (MI) over 3 years of follow-up. In a new pre-specified analysis, complete versus culprit-only PCI was associated with a higher rate of freedom from residual angina (87.5% vs. 84.3%; P = 0.013) and improved quality of life, as assessed via the 19-item Seattle Angina Questionnaire, including reduced physical limitation. Improving PCI outcomes in patients with diabetes remains a focus of several trials. The Second-generation drUg-elutinG Stents in diAbetes: a Randomized Trial (SUGAR trial), which randomised 1175 patients with diabetes and CAD to an amphilimus-eluting stent (Cre8 EVO) versus the conventional Resolute Onyx stent, previously reported that the Cre8 stent met non-inferiority and was associated with a possible 35% reduction in target lesion failure (TLF) at 12 months. However, by 2 years, the difference in TLF was no longer significant (10.4% vs. 12.1%; HR 0.84; 95% CI 0.60–1.19), with numerical but non-significant differences in the individual components of cardiac death (3.1% vs. 3.4%), target vessel MI (6.6% vs. 7.6%), and target lesion revascularization (4.3% vs. 4.6%). While these 2-year results were disappointing, we await results of further studies of new stents in this clinical setting, including the ABILITY trial (NCT04236609) comparing an Abluminus DES + sirolimus-eluting stent system versus Xience. Quantitative flow ratio (QFR), an angiography-based approach to estimating fractional flow reserve, previously showed superiority versus conventional angiography guidance at 1 year in the FAVOR III (Comparison of Quantitative Flow Ratio Guided and Angiography-Guided Percutaneous InterVention in Patients With cORonary Artery Disease) trial. New data report that the benefit of the QFR-guided strategy was sustained at 2 years, associated with a 34% reduction in the composite of death, MI or ischaemia-driven revascularization [8.5% vs. 12.5%; HR 0.66 (95% CI 0.54–0.81)]. The degree of outcome improvement was greatest amongst those patients in whom the pre-planned PCI strategy was modified by QFR. Current ESC guidelines give post-PCI surveillance with stress testing a Class IIb recommendation. The POST-PCI (Routine Functional Testing or Standard Care in High-Risk Patients after PCI) trial randomised 1706 patients at 1 year after PCI to routine functional testing (nuclear stress testing, exercise electrocardiography, or stress echocardiography) versus standard care. Use of routine functional testing failed to show any reduction in the primary outcome of death, MI, or hospitalization for unstable angina at 2 years (5.5% vs. 6.0%; HR 0.90; 95% CI 0.61–1.35; P = 0.62), supporting standard care in these patients.
Procedural time in graft-angiography studies may be much longer than in non-graft cases. The Randomised Controlled Trial to Assess Whether Computed Tomography Cardiac Angiography Can Improve Invasive Coronary Angiography in Bypass Surgery Patients (BYPASS CTCA) randomised 688 prior CABG patients to CTCA prior to coronary angiography versus standard care. Those who underwent prior CTCA had a shorter procedure duration (mean 17.4 vs. 39.5 min; difference − 22.12 min; 95% CI − 24.68 to − 19.56), less contrast during the invasive angiogram (mean 77.4 vs. 173 ml), less contrast-induced nephropathy (3.2% vs. 27.9%; P < 0.0001) and 40% greater patient satisfaction. BYPASS CTCA thus supports consideration of prior CTCA, particularly with more complex or uncertain graft location or in patients at greater renal risk. The 2018 ESC guidelines recommend radial access for PCI unless there are overriding procedural considerations. A new patient-level meta-analysis of 7 trials, incorporating 21,700 patients, reported that, at 30 days, transradial versus transfemoral access was associated with a 23% reduction in all-cause mortality (1.6% vs. 2.1%; P = 0.012) and a 45% reduction in major bleeding (1.5% vs. 2.7%; P < 0.001). However, transradial access is not without complications, the commonest of which is radial artery occlusion. In the RIVARAD (Prevention of Radial Artery Occlusion With Rivaroxaban After Transradial Coronary Procedures) trial, 538 patients were randomised following coronary angiography to rivaroxaban 10 mg once daily for 7 days versus standard care (no rivaroxaban). At 30 days, use of rivaroxaban was associated with a 50% reduction in radial artery occlusion as defined by ultrasound (6.9% vs. 13.0%; OR 0.50; 95% CI 0.27–0.91). Bleeding Academic Research Consortium (BARC)-defined bleeding events were numerically but not significantly higher in the rivaroxaban group (2.7% vs. 1.9%; OR 1.4; 95% CI 0.4–4.5). To assess whether distal radial artery puncture might reduce occlusion rates, the Distal Versus Conventional Radial Access (DISCO-RADIAL) trial randomised 1,307 patients to distal versus conventional radial access. Distal access was associated with a shorter median haemostasis time (153 vs. 180 min; P < 0.001), but radial artery spasm was more common (5.4% vs. 2.7%; P = 0.015), crossover rates were higher (7.4% vs. 3.5%; P = 0.002) and no difference in the primary endpoint of occlusion on vascular ultrasound was noted at discharge (0.31% vs. 0.91%; P = 0.29). While radial access is now considered preferable, transfemoral access is still required in certain cases. As transfemoral operator skills may potentially decline through reduced volume or lack of experience, ultrasound-guided access techniques are increasingly being used. The UNIVERSAL (Routine Ultrasound Guidance for Vascular Access for Cardiac Procedures) trial randomised 621 patients to femoral access with ultrasound guidance and fluoroscopy versus fluoroscopy alone. Interestingly, and in contrast with previous trials, ultrasound guidance was not associated with a significant reduction in the composite of BARC 2, 3, and 5 bleeding or major vascular complication at 30 days (12.9% vs. 16.1%; P = 0.25). The strategy of multi-arterial CABG is endorsed by surgical guidelines but takes longer, is more technically demanding and can be associated with increased complications, such as deep sternal wound infections. An observational single-centre study by Momin et al.
of 2979 patients undergoing isolated CABG (from 1999 to 2020) reported that those receiving total arterial revascularization had the longest mean survival (18.7 years) versus single internal mammary artery (SIMA) plus vein grafts (16.1 years; P < 0.00001) versus vein grafts only (10.4 years; P < 0.00001). Interestingly, survival with total arterial revascularization was not significantly different to SIMA plus radial artery ± vein grafting (18.60 years). This study supports the durability of arterial grafting, although conclusions are limited by its non-randomised design. Conversely, Saadat et al. stratified 241,548 patients from the Society of Thoracic Surgeons (STS) database undergoing isolated CABG in 2017 into 3 groups: single arterial (86%), bilateral internal thoracic artery multi-arterial (BITA-MABG; 5.6%), and radial artery multi-arterial (RA-MABG; 8.5%). After risk adjustment, the observed to expected event (O/E) ratios showed no significant difference in mortality between the three strategies (1.00 vs. 0.98 vs. 0.96), and the risk of deep sternal wound infection was highest in the BITA-MABG group (1.91 vs. 0.90 vs. 0.96). Given the ongoing data uncertainty, results from the prospective randomised ROMA trial are eagerly awaited (NCT03217006).

Structural: Aortic Valve Interventions

There has been a dramatic expansion in transcatheter aortic valve interventions over the past decade. A recent analysis of US registry data conducted by Sharma et al. reported a near doubling in transcatheter aortic valve replacement (TAVR) volume overall between 2015 and 2021 (44.9% in 2015 vs. 88% in 2021, P < 0.01), including a 2.7-fold increase in those < 65 years, in whom uptake is now similar to surgical aortic valve replacement (SAVR) (47.5% TAVR vs. 52.5% SAVR, P = ns), particularly in younger patients with heart failure (HF) (OR 3.84; 95% CI 3.56–4.13; P < 0.0001) or prior CABG (OR 3.49; 95% CI 2.98–4.08; P < 0.001). These numbers may increase further across all risk categories once the long-term data from the seminal PARTNER (Placement of AoRTic TraNscathetER Valve Trial) trials become available. Emerging evidence from trials such as AVATAR (Aortic Valve Replacement Versus Conservative Treatment in Asymptomatic Severe Aortic Stenosis) and RECOVERY (Early Surgery Versus Conventional Treatment in Very Severe Aortic Stenosis) suggests that early intervention for severe aortic stenosis (AS), before patients develop symptoms, may be of benefit. In a pooled analysis of key trials (PARTNER 2A, 2B & 3) involving 1974 patients (mean age 81 years; 45% women), Généreux et al. evaluated the relationship between cardiac damage at baseline and prognosis in patients with severe symptomatic AS who underwent AVR (40% SAVR, 60% TAVI). Baseline cardiac damage was defined using a 0–4 scoring system (0 = no damage and 4 = biventricular failure). Baseline damage correlated strongly with 2-year mortality (HR 1.51 per higher stage; 95% CI 1.32–1.72), with each increase in stage conferring a 24% increase in mortality (P = 0.001) (from 2.5% at stage 0 to 28.2% at stage 4), suggesting a role for earlier intervention. Several ongoing trials, such as EARLY TAVR (Evaluation of TAVR Compared to Surveillance for Patients With Asymptomatic Severe Aortic Stenosis), TAVR UNLOAD (Transcatheter Aortic Valve Replacement to UNload the Left Ventricle in Patients With ADvanced Heart Failure) and PROGRESS (Management of Moderate Aortic Stenosis by Clinical Surveillance or TAVR), aim to answer these questions directly.
Valve-in-valve (VIV) TAVR is being increasingly utilised in patients with failed AVR; however, it remains unclear whether these patients do better with or without balloon valve fracture (BVF). In a registry analysis of 2975 patients undergoing VIV-TAVR (with balloon-expandable SAPIEN 3 or SAPIEN 3 Ultra) between December 2020 and March 2022, Garcia et al. reported that BVF versus no BVF led to a larger mean valve area (1.6 vs. 1.4 cm²; P < 0.01) and a lower mean valve gradient (18.2 vs. 22.0 mm Hg; P < 0.01) but also to higher rates of death or life-threatening bleeding (OR 2.55; 95% CI 1.44–4.50) and vascular complications (OR 2.06; 95% CI 0.95–4.44). However, sub-analysis suggested the increase in mortality was seen mainly when BVF was undertaken before VIV-TAVR (OR 2.90; 95% CI 1.21–6.94), whereas no difference was noted when it was undertaken after VIV-TAVR. This suggests that BVF should only be performed once the operator has the new TAVR valve in place. While designed primarily for AS, conventional TAVR devices have sometimes been utilised for the treatment of severe aortic regurgitation (AR). The novel Trilogy heart valve system, specifically developed for AR, was evaluated in 45 patients (mean age 77, 40% female, mean EuroSCORE 7.1%) with moderate to severe AR by Tamm et al. The primary endpoint, a reduction of ≥ 1 AR grade, was met in 100% of cases. There were no episodes of stroke, death, or conversion to open surgery, but 9 patients (23%) required permanent pacing. Subclinical leaflet thrombosis (SLT) is a relatively common complication of TAVR; however, the optimal treatment strategy, whether with anticoagulation or antiplatelets, remains contested. The multicentre ADAPT TAVR (Edoxaban vs. DAPT in reducing subclinical leaflet thrombosis and Cerebral Thromboembolism After TAVR) trial randomised 229 patients (mean age 80.1 years; 41.9% men) undergoing TAVR for symptomatic severe AS, and without another indication for OAC, to edoxaban 60 mg or 30 mg once daily versus DAPT with aspirin and clopidogrel. At 6 months, edoxaban, by intention-to-treat analysis, was associated with a trend to reduced SLT as assessed by cardiac CT (9.8% vs. 18.4%; P = 0.076) and, in contrast to prior trials with DOAC post-TAVR, there was no difference in bleeding rates (11.7% vs. 12.7%; P = ns). Interestingly, a secondary per-protocol analysis focusing on patients with high compliance did reach statistical significance (19.1% vs. 9.1%; risk ratio 0.48; 95% CI 0.23–0.99). However, despite the use of serial brain MRI, there was no difference in the presence/number of cerebral lesions and no difference in neurocognitive outcomes, including stroke, at 6 months. Giustino et al. reported a new secondary analysis from the GALILEO trial (Rivaroxaban-based Antithrombotic Strategy to an Antiplatelet-based Strategy After TAVR to Optimize Clinical Outcomes) which, as described previously, had randomised 1644 patients post-TAVR without an indication for oral anticoagulation (OAC) to rivaroxaban 10 mg plus aspirin versus DAPT with aspirin plus clopidogrel for 90 days, but was stopped early due to higher thromboembolic, bleeding and mortality events in the rivaroxaban group. In the new analysis, thromboembolic events appeared to be associated with a higher risk of mortality (HR 8.41; 95% CI 5.10–13.87) than BARC 3 bleeding (HR 4.34; 95% CI 2.31–8.15).
Furthermore, this mortality risk appeared higher than that conferred by known risk factors such as age (adjusted HR 1.04; 95% CI 1.01–1.08) and chronic obstructive pulmonary disease (COPD) (adjusted HR 2.11; 95% CI 1.30–3.41). These findings, along with previous data from ATLANTIS (AntiThrombotic Strategy After Trans-Aortic Valve Implantation for Aortic Stenosis) and ENVISAGE-TAVI AF (Edoxaban Compared to Standard Care After Heart Valve Replacement Using a Catheter in Patients With Atrial Fibrillation), show how the role of DOACs post-TAVI remains uncertain. However, given the devastating impact of thromboembolic events in this patient group, ongoing research is warranted. The absence of a bleeding signal with DOAC in ADAPT TAVR, in which most patients received lower-dose edoxaban, suggests that a lower-dose DOAC for a short duration while the valve is endothelialising may improve the risk/benefit ratio. Another area of current contention is the use of cerebral embolic protection (CEP) to reduce the risk of stroke. While current guidance does not mandate its use, some operators use it in high-risk cases. Kaur et al. conducted a meta-analysis of 1,016 patients (mean age 81.3 years) from several randomised trials (DEFLECT III, MISTRAL-C, CLEAN-TAVI, SENTINEL, and REFLECT I and II) evaluating the TriGuard (Keystone Heart) and Sentinel devices versus standard care. At 30 days, CEP was not associated with a reduction in the primary outcome of all-cause stroke (RR 0.93; 95% CI 0.57–1.53), nor a reduction in mortality. Subsequently, the PROTECTED TAVR (Stroke PROTECTion With SEntinel During Transcatheter Aortic Valve Replacement) trial randomised 300 patients (mean age 72 years, 40% female) to CEP with a Sentinel device versus standard care. Again, no significant difference in the primary outcome of stroke at 72 h was noted (2.4% vs. 2.9%, P = 0.30), although numbers were relatively small. BHF PROTECT TAVI (British Heart Foundation Randomised Clinical Trial of Cerebral Embolic Protection in Transcatheter Aortic Valve Implantation) plans to enrol 7000 patients and its findings are eagerly awaited.

Structural: Mitral and Tricuspid Valve Interventions

The favourable findings in COAPT (Cardiovascular Outcomes Assessment of the MitraClip Percutaneous Therapy for Heart Failure Patients With Functional Mitral Regurgitation [MR]) helped lead to device approval. However, it has been suggested that the reason COAPT was favourable was its strict eligibility criteria, mandating LVEF ≥ 20% to ≤ 50%, left ventricular end-systolic dimension (LVESD) ≤ 70 mm and failure of aggressive medical therapy. EXPAND (A Contemporary, Prospective Study Evaluating Real-world Experience of Performance and Safety for the Next Generation of MitraClip Devices) was a prospective multicentre registry in which 1,041 patients with site-reported MR 3+/4+ were enrolled and received the MitraClip. A recent analysis compared 125 "COAPT-like" patients meeting COAPT inclusion criteria versus 128 "non-COAPT" patients. At 1 year, COAPT-like patients did not show any difference in the primary outcome of all-cause mortality (22.6% vs. 19.6%, P = 0.37) or heart failure hospitalisation (32.6% vs. 25%, P = 0.08). In keeping with their lower baseline MR severity, more non-COAPT patients achieved a reduction in MR to mild or less (≤ 1+) (97.2% vs. 86.5%), suggesting that MitraClip may benefit patients beyond the strict COAPT criteria, but prospective randomised data are needed, such as from the ongoing EVOLVE-MR (MitraClip for the Treatment of Moderate Functional Mitral Regurgitation).
Previous data from CLASP (Edwards PASCAL TrAnScatheter Mitral Valve RePair System Study) and CLASP II have validated the safety and efficacy of the Edwards PASCAL™ transcatheter valve repair system. CLASP IID randomised 180 patients with severe degenerative symptomatic MR not eligible for surgery (mean age 81 years, 67% male, median STS 5.9%) to transcatheter edge-to-edge repair (TEER) with the PASCAL device (Edwards Lifesciences) versus the MitraClip (Abbott) device. At 30 days, the PASCAL device met criteria for non-inferiority with respect to the composite endpoint of CV death, stroke, MI, renal replacement therapy, severe bleeding and re-intervention (3.4% vs. 4.8%; P for non-inferiority < 0.05). Of interest, the proportion of patients with MR ≤ 1+ was durable in the PASCAL group (87.2% at discharge vs. 83.7% at 6 months; P = 0.317), whereas MitraClip outcomes showed some loss of efficacy (88.5% at discharge vs. 71.2% at 6 months; P = 0.003). Although only interim data, this hints that the PASCAL device may have superior durability. Valve-in-valve transcatheter mitral valve replacement (ViV-TMVR) may be utilised in very high-risk patients without a surgical option on a case-by-case basis, despite a paucity of real-world outcome data. Brescia et al. retrospectively compared outcomes of 48 patients with bioprosthetic mitral valve (MV) failure undergoing ViV-TMVR (mean age 65 years, 63% female, mean STS 7.9%) versus 36 patients undergoing redo MV surgery (mean age 58, 72% female, mean STS 7.1%). ViV-TMVR was not associated with an improvement in 1-year survival (90% vs. 80%, P = 0.33) and was associated with a higher average postprocedural gradient (8.9 vs. 5.7 mm Hg; P < 0.001). Thus, ViV-TMVR is a good option for high-risk patients, but in less comorbid patients it may not provide as good a long-term benefit as surgery, particularly in those with smaller original surgical valves. Data from the ongoing PARTNER 3 Mitral Valve-in-Valve trial will be useful to help guide decision-making in such patients. Several seminal trials, such as TRILUMINATE (Abbott Transcatheter Clip Repair System in Patients With Moderate or Greater TR), TriBAND (Transcatheter Repair of Tricuspid Regurgitation With Edwards Cardioband TR System Post-Market Study) and TRISCEND (Investigation of Safety and Clinical Efficacy After Replacement of Tricuspid Valve With Transcatheter Device), have led to a much greater focus on transcatheter tricuspid interventions. CLASP TR (Edwards PASCAL Transcatheter Valve Repair System Pivotal Clinical Trial), a prospective single-arm multicentre study, evaluated 1-year outcomes of the PASCAL transcatheter valve repair system in 65 patients (mean age 77 ± 9 years, 55% female, mean STS 7.7%) with severe tricuspid regurgitation (TR). In keeping with the high baseline comorbidity, the major adverse event rate was 16.9% (n = 11), with all-cause mortality of 10.8% (n = 7) and 18.5% (n = 12) re-admitted with heart failure. Paired analysis demonstrated significant improvements in New York Heart Association (NYHA) class (P < 0.001), KCCQ score (P < 0.001) and 6-min walk test (6MWT) (P = 0.014). Importantly, the reduction in TR severity noted at 30 days (P < 0.001) was maintained at 1 year (100% had ≥ 1 grade reduction and 75% had ≥ 2 grade reduction, P < 0.001).
TRICLASP (Transcatheter Repair of Tricuspid Regurgitation With Edwards PASCAL Transcatheter Valve Repair System), a prospective, single-arm multicentre trial, evaluated 30-day outcomes in 67 of 74 patients (mean age 80 years, 58% female, mean STS 9%) undergoing treatment with the PASCAL Ace transcatheter repair system for severe symptomatic inoperable TR (Fig. ). The primary composite outcome of major adverse events occurred in 3%, with 88% achieving ≤ 1 grade reduction in TR versus baseline (P < 0.001), along with significant improvements in NYHA class, KCCQ score, and 6MWT (P < 0.001). Longer-term follow-up data are awaited. The TriClip-Bright (An Observational Real-world Study Evaluating Severe Tricuspid Regurgitation Patients Treated With the Abbott TriClip™ Device) study, a multicentre, prospective study, reported 30-day outcomes for 300 patients (78 ± 7.6 years) undergoing treatment with the TriClip transcatheter valve repair system (Fig. ). The primary endpoint of procedural success (survival to discharge) was met in 91%. Significant improvements in both NYHA class and KCCQ score were noted (P < 0.001). The trial is still actively recruiting, with a planned follow-up duration of 1 year.

Structural: Catheter Based Left Atrial Appendage and Patent Foramen Ovale Closure

While definitive studies to guide patent foramen ovale (PFO) closure practice are still lacking, a multidisciplinary consensus statement by SCAI was published this year recommending closure in patients aged 18–60 with a PFO-associated stroke, platypnoea-orthodeoxia syndrome with no other cause, or systemic embolism with no other cause. Of note, in the absence of PFO-associated stroke, the guidance does not recommend PFO closure for transient ischaemic attack, AF with ischaemic stroke, migraine, decompression illness or thrombophilia. Several left atrial appendage closure (LAAC) devices have been approved in recent years, with favourable long-term data published last year for the Watchman LAAC device (Boston Scientific). The AMULET IDE (Amplatzer Amulet Left Atrial Appendage Occluder Versus Watchman Device for Stroke Prophylaxis) trial randomised patients with non-valvular atrial fibrillation (AF) not suitable for anticoagulation to LAAC with an Amulet device (n = 934) versus a Watchman device (n = 944). At 3 years, there was no difference in the primary composite endpoint of CV mortality, ischaemic stroke or systemic embolism (11.1% vs. 12.7%, P = 0.31), all-cause mortality (14.6% vs. 17.9%; P = 0.07) or major bleeding (16.1% vs. 14.7%; P = 0.46). Similarly, updated data from the US LAAC registry comparing the Watchman FLX to its previous iteration, the Watchman 2.5, were published this year by Freeman et al., who reported outcomes from 54,206 patients (mean age 76 years; 59% men) undergoing LAAC with the new Watchman FLX (n = 27,103) versus the previous Watchman 2.5 (n = 27,103). In-hospital major adverse events were significantly lower with the new Watchman FLX (1.35% vs. 2.4%, OR 0.57; 95% CI 0.50–0.65), driven by reductions in pericardial effusion requiring intervention (0.42% vs. 1.23%), device embolization (0.02% vs. 0.06%) and major bleeding (1.08% vs. 2.05%). Longer follow-up will help clarify whether technical differences between devices confer long-term clinical outcome advantages. Despite the evolution of device technology for LAAC, key clinical questions, such as the optimal anticoagulation strategy, remain. Freeman et al. conducted a US LAAC registry analysis of 31,994 patients who underwent Watchman LAAC between 2016 and 2018.
Only 12.2% of patients received the full anticoagulation protocol mandated by the clinical trials (Fig. ). In contrast to previous European reports from EWOLUTION (Registry on WATCHMAN Outcomes in Real-Life Utilization), the 45-day adjusted adverse event rate was lower if patients were discharged on warfarin alone (HR 0.692; 95% CI 0.569–0.841) or DOAC alone (HR 0.731; 95% CI 0.574–0.930) versus warfarin plus aspirin, suggesting that further research is needed to guide the optimal antithrombotic strategy post-LAAC.
The ISCHAEMIA (Initial Invasive or Conservative Strategy for Stable Coronary Disease) trial previously reported that routine invasive therapy versus optimal medical therapy (OMT) in stable patients with moderate ischaemia did not reduce major adverse events (MAE), but the possibility of excess events over longer follow-up was queried. The ISCHAEMIA-EXTEND study (median follow-up 5.7 years) reported that, while there was still no difference in all-cause mortality with routine invasive versus medical therapy (12.7% vs. 13.4%, P = 0.74), after 2 years the survival curves for cardiovascular (CV) death started to diverge and by 7 years CV death was significantly lower in the routine invasive group (6.4% vs. 8.6%; HR 0.78; 95% CI 0.63–0.96). Conversely, there was an increase in non-CV death in the routine invasive group (5.5% vs. 4.4%; HR 1.44; 95% CI 1.08–1.91). On balance, this still supports an initial OMT strategy but highlights the utility of understanding anatomy to risk stratify and perhaps identify those patients who will benefit the most from CV risk reduction (Fig. ). Ten-year follow-up data will prove informative. New-onset, stable chest pain remains a substantial burden on healthcare systems. SCOT-HEART (Scottish COmputed Tomography of the HEART Trial) and PROMISE (PROspective Multicenter Imaging Study for Evaluation of Chest Pain) previously reported benefit from early computed tomography coronary angiography (CTCA) for the evaluation of stable chest pain. FFR-CT may further improve CT diagnosis. PRECISE (Prospective Randomized Trial of the Optimal Evaluation of Cardiac Symptoms and Revascularization) randomised 2103 patients (mean age 58 years, 50% women) with suspected CAD to a risk-scoring algorithm (with low-risk patients deferred and high-risk patients undergoing FFR-CT) versus standard care. At a median follow-up of 11.8 months, algorithm-guided use of FFR-CT resulted in markedly lower MACE (4.2% vs. 11.3%; adjusted HR 0.29; 95% CI 0.20–0.41), driven by a lower rate of catheterisation without obstructive CAD. There was no difference in all-cause death. A subsequent cost-effectiveness analysis is ongoing. Despite current advances in ACS detection, prediction of recurrent events remains difficult. Batra et al. assessed the predictive value of biomarker modelling (with hs-TnT, CRP, GDF-15, cystatin C and NT-proBNP) in 14,221 patients enrolled in the PLATO (A Comparison of Ticagrelor and Clopidogrel in Patients With Acute Coronary Syndrome) and TRACER (Trial to Assess the Effects of Vorapaxar (SCH 530,348; MK-5348) in Preventing Heart Attack and Stroke in Participants With Acute Coronary Syndrome) trials. An outcome model termed "ABC-ACS Ischaemia" predicted 1-year risk of CV death/MI with C-indices of 0.71 and 0.72 in the development and validation cohorts, respectively (a short worked illustration of the C-statistic appears below). While encouraging, such models likely need to be integrated with additional individual patient characteristics to improve risk prediction. Optical coherence tomography (OCT) has demonstrable utility in assessing plaque morphology and so may be useful in delineating between different aetiologies of ACS. The Tokyo, Kanagawa, Chiba, Shizuoka, and Ibaraki active OCT applications for ACS (TACTICS) registry evaluated plaque morphology in 702 ACS patients undergoing OCT-guided PCI and reported that rupture was the commonest aetiology (59%), followed by plaque erosion (26%) and then calcification (4%) (Fig. ).
However, at 12 months, calcified nodules conferred the worst outcome with a 32.1% MACE rate compared to 12.4% and 6.2% amongst ruptures or erosions, respectively.
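As a brief aside on the discrimination statistics quoted for the ABC-ACS Ischaemia model above, the sketch below shows how a concordance (C-) index can be computed for a binary 1-year outcome. It is purely illustrative: the function and toy data are hypothetical, not taken from the PLATO/TRACER analysis, and a full survival version would also need to handle censored follow-up.

```python
# Illustrative C-index (concordance) calculation for a binary outcome risk model.
# A C-index of ~0.7, as reported for "ABC-ACS Ischaemia", means that in ~70% of
# event/non-event patient pairs the model assigns the higher predicted risk to
# the patient who actually had the event.
from itertools import combinations

def c_index(risk_scores, outcomes):
    """Proportion of comparable pairs (one event, one non-event) in which the
    event patient received the higher predicted risk; ties count as half."""
    concordant, comparable = 0.0, 0
    for (r_i, y_i), (r_j, y_j) in combinations(zip(risk_scores, outcomes), 2):
        if y_i == y_j:
            continue  # both had, or both avoided, the event: pair not comparable
        comparable += 1
        event_risk, nonevent_risk = (r_i, r_j) if y_i == 1 else (r_j, r_i)
        if event_risk > nonevent_risk:
            concordant += 1.0
        elif event_risk == nonevent_risk:
            concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Hypothetical predicted 1-year risks of CV death/MI and observed outcomes (1 = event)
risks = [0.02, 0.10, 0.25, 0.05, 0.40, 0.08, 0.15, 0.03]
events = [0, 0, 1, 0, 1, 0, 0, 1]
print(round(c_index(risks, events), 2))  # 0.73 for this toy data set
```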
Strategies to shorten DAPT duration post-PCI in high bleeding risk patients continue to be evaluated. Longer-term follow-up at 15 months of MASTER DAPT (Management of High Bleeding Risk Patients Post-Bioresorbable Polymer Coated Stent Implantation With an Abbreviated Versus Prolonged DAPT Regimen) confirmed the initial results, with the incidence of the composite endpoint (death, MI, stroke, major bleeding) remaining non-inferior for shortened DAPT versus standard care (HR 0.92, 95% CI 0.76–1.12; P = 0.40), but a significantly lower rate of major bleeding in the short DAPT group (HR 0.68, 95% CI 0.56–0.83; P = 0.001). These data, although important, apply in the context of contemporary stent design, such as the biodegradable-polymer sirolimus-eluting Ultimaster stent (Terumo) used in MASTER DAPT. Effective reversal of antiplatelet therapy could be helpful when active bleeding risk outweighs ischaemic risk, particularly in elderly patients. No antiplatelet reversal agents are currently licensed; however, an interesting drug under investigation is bentracimab, a recombinant IgG1 monoclonal antibody antigen-binding fragment that binds with high affinity to ticagrelor and its active metabolite. Bhatt et al., in a phase IIb trial, randomised 205 patients (mean age 61 years, 50% female) already treated with DAPT for 30 days to bentracimab (n = 154) versus placebo (n = 51). Use of bentracimab was associated with a significant reduction in the primary endpoint of percentage inhibition of P2Y12 reaction units at 4 h (P < 0.0001) without any excess of thrombotic events or deaths. Larger phase III trials are eagerly awaited. In patients with an indication for antiplatelet monotherapy, previous studies have suggested a possible benefit of clopidogrel over aspirin, at least in certain patient subgroups. PANTHER (P2Y12 inhibitor vs. aspirin monotherapy in patients with coronary artery disease) was a meta-analysis of several large randomised trials totalling 24,325 patients with established coronary artery disease (mean age 64 years, 22% women) which compared P2Y12 inhibition (62% clopidogrel, 38% ticagrelor) versus aspirin. P2Y12 inhibition was associated with a 12% reduction in the primary composite outcome of CV death, MI or stroke at 18 months (5.5% vs. 6.3%; HR 0.88; 95% CI 0.79–0.97), driven by a lower risk of MI (HR 0.77; 95% CI 0.66–0.90), with no difference in stroke (HR 0.85; 95% CI 0.70–1.02) or bleeding (6.4% vs. 7.2%; HR 0.89; 95% CI 0.81–0.98). While firm conclusions are difficult due to the inclusion of two different P2Y12 inhibitors, this suggests that P2Y12 inhibitor monotherapy may be warranted instead of aspirin for long-term secondary prevention in patients with coronary artery disease. Indobufen is a reversible COX inhibitor with similar antithrombotic effects to aspirin but fewer gastrointestinal side effects and a potentially lower risk of bleeding. The OPTION (the Efficacy and Safety of Indobufen and Low-dose Aspirin in Different Regimens of Antiplatelet Therapy) trial randomised 4,551 patients (mean age 61 years; 65% male) without an acute troponin rise, undergoing PCI with DES, to 1 year of DAPT with indobufen 100 mg BD plus clopidogrel 75 mg OD (n = 2258) versus aspirin 100 mg OD plus clopidogrel 75 mg OD (n = 2293). At 1 year, use of indobufen versus aspirin met non-inferiority with respect to the primary composite outcome (CV death, MI, stroke, ISR and BARC type 2, 3 or 5 bleeding) (4.47% vs. 6.11%; HR 0.73; 95% CI 0.56–0.94; P < 0.001 for noninferiority).
The secondary safety endpoint of BARC 2, 3 or 5 bleeding was lower with indobufen (2.97% vs. 4.71%; HR 0.63; 95% CI 0.46–0.85), driven by a reduction in BARC 2 bleeding (1.68% vs. 3.49%; P < 0.001). These intriguing data suggest a potential new treatment option, particularly for patients with gastrointestinal bleeding or aspirin allergy. Full-dose anticoagulation plus antiplatelet therapy significantly increases bleeding risk, but the role of low-dose anticoagulation for vascular prevention continues to be studied. Asundexian is a novel oral activated factor XI inhibitor which may lower thromboembolic events with a lower bleeding risk. In the phase II PACIFIC-AMI trial (Study to Gather Information About the Proper Dosing and Safety of the Oral FXIa Inhibitor BAY 2,433,334 in Patients Following an Acute Heart Attack), 1601 patients (median age 68 years, 23% women) with recent acute MI were randomised to asundexian (10 mg, 20 mg or 50 mg) versus placebo in addition to standard DAPT. At 4 weeks, asundexian was not associated with a significant increase in the pre-specified safety outcome of BARC 2 bleeding versus placebo (ratio 0.98; 90% CI 0.71–1.35), although there was a numerical increase in bleeding with higher asundexian doses. Based on this trial, asundexian 50 mg daily is being considered for a phase III cardiovascular outcomes trial in acute MI. Asundexian was also evaluated in the phase IIb PACIFIC-STROKE trial (Study to Gather Information About the Proper Dosing and Safety of the Oral FXIa Inhibitor BAY 2,433,334 in Patients Following an Acute Stroke), which randomised 1808 patients with non-embolic ischaemic stroke to asundexian (10 mg, 20 mg or 50 mg) versus placebo in addition to standard care, including antiplatelet therapy. Asundexian (whether by pooled or individual dose analysis) was not associated with a reduction in the primary efficacy outcome of ischaemic stroke or covert brain infarction at 6 months, although the primary safety outcome of major or clinically relevant bleeding was not significantly different [asundexian pooled vs. placebo HR 1.57 (90% CI 0.91–2.71)]. It thus remains unclear whether asundexian has a useful role in ischaemic stroke. In current PPCI guidelines, bivalirudin (Class IIa) was replaced by unfractionated heparin (UFH) (Class I), as previous studies reported equipoise in clinical outcomes but more difficult drug administration with bivalirudin. BRIGHT-4 (Bivalirudin With Prolonged Full Dose Infusion Versus Heparin Alone During Emergency PCI) randomised 6,016 PPCI patients from 63 Chinese centres in open-label fashion to a bivalirudin bolus plus infusion for a median of 3 h versus a UFH bolus. Patients underwent predominantly radial PPCI (93%) without any prior thrombolytic, anticoagulant or glycoprotein inhibitor treatment. At 30 days, bivalirudin was associated with a 31% reduction in the primary outcome of all-cause mortality or BARC 3–5 bleeding (HR 0.69; 95% CI 0.53–0.91, P = 0.007), reduced BARC 3–5 bleeding (HR 0.21; 95% CI 0.08–0.54), reduced all-cause mortality (3.0% vs. 3.6%, P = 0.04), and reduced stent thrombosis (0.4% vs. 1.1%, P = 0.0015). Despite these favourable data, given the inherent difficulties in bivalirudin delivery and its moderately higher cost versus UFH, it is unclear whether the BRIGHT-4 findings will change practice, although a stronger guideline recommendation would be expected. Tongxinluo (TXL) is a traditional Chinese medicine, approved in China for the treatment of stroke and angina.
CTS-AMI (China Tongxinluo Study for Myocardial Protection in Patients With Acute Myocardial Infarction) randomised 3755 patients with STEMI undergoing PPCI at 124 Chinese centres to TXL versus placebo, in addition to standard therapy. Use of TXL was associated with a 36% reduction in the primary composite outcome of CV death, revascularisation, MI and stroke at 30 days (3.39% vs. 5.25%; RR 0.64; 95% CI 0.47–0.88) and a 30% reduction in cardiac death (2.97% vs. 4.24%; RR 0.70; 95% CI 0.50–0.99). While the findings are dramatic, further work is necessary to understand the mechanism of action of this novel drug, and further randomised multicentre trials are needed to confirm efficacy.
Following on from the HIS-Alternative trial (His Pacing Versus Biventricular Pacing in Symptomatic HF With Left Bundle Branch Block), which reported similar outcomes with His-bundle CRT (His-CRT) versus conventional biventricular CRT (BiV-CRT), the LBBP-RESYNC (Left Bundle Branch Versus Biventricular Pacing For Cardiac Resynchronization Therapy) trial randomised 40 patients with non-ischaemic cardiomyopathy, LBBB and an indication for resynchronisation to left bundle branch CRT (LBB-CRT) versus standard BiV-CRT pacing. LBB-CRT was associated with a larger improvement in LVEF at 6 months (21.1% vs. 15.6%; P = 0.039, 95% CI 0.3–10.9), a greater reduction in LV end-systolic volumes and a greater reduction in NT-proBNP (Fig. ). Vijayaraman et al. presented a retrospective analysis of 477 patients comparing those who underwent conduction system pacing (LBB or His-bundle pacing) versus conventional BiV-CRT. Conduction system pacing was associated with a lower incidence of the primary composite of death or heart failure hospitalisation (28.3% vs. 38.4%; P = 0.013), mainly driven by a reduction in HF hospitalisations. Vijayaraman et al. also presented a retrospective analysis of 212 patients undergoing rescue LBB pacing who met indications for CRT but had coronary venous lead failure or were non-responders to BiV-CRT. LBB pacing (successful in 94%) was associated with an improvement in LVEF from 29% at baseline to 40% at follow-up (P < 0.001) (Fig. ). The MELOS (Multicentre European Left Bundle Branch Area Pacing Outcomes Study) registry evaluated 2533 patients from 14 European centres undergoing transseptal left bundle branch area pacing (LBBAP), 27.5% for heart failure and 72.5% for bradycardia. LB fascicular capture was most common (69.5%), followed by LV septal capture (21.5%), then proximal LBB capture (9%). The overall complication rate was 11.7%, including ventricular trans-septal complications in 8.3%. Overall, these studies collectively support the efficacy and safety of conduction system pacing as a suitable alternative to conventional BiV-CRT, although larger randomised trials are required to formally test superiority. Infections related to cardiac implanted electronic devices (CIEDs) carry high mortality and morbidity, and the European Heart Rhythm Association (EHRA) consensus advises prompt extraction. Pokorney et al. analysed a Medicare database of 11,619 patients admitted with a CIED infection, of whom only 2,109 (28.2%) had device extraction within 30 days. Device extraction versus no extraction was associated with a reduction in 1-year mortality (HR 0.79, 95% CI 0.70–0.81), and early device extraction within 6 days versus no extraction was associated with a 41% reduction in 1-year mortality (P < 0.001). Subcutaneous ICDs (S-ICDs) have been evaluated in previous trials, including PRAETORIAN and UNTOUCHED, as an alternative to transvenous systems for patients at risk of lead complications or infections. The ATLAS S-ICD (Avoid Transvenous Leads in Appropriate Subjects) trial randomised 593 patients with an indication for ICD to S-ICD versus transvenous ICD (TV-ICD) implantation. S-ICD was associated with a 92% reduction in perioperative lead complications at 6 months (0.4% vs. 4.8%; OR 0.08; 95% CI 0.00–0.55), although the composite safety outcome (including the primary outcome plus device-related infection requiring surgical revision, significant wound haematoma requiring evacuation or interruption of oral anticoagulation, MI, stroke/TIA, or death) was similar (4.4% vs.
5.6%; OR 0.78, 95% CI 0.35–1.75), and inappropriate shocks were non-significantly more common (2.7% vs. 1.7%; HR 2.37, 95% CI 0.98–5.77). In heart failure patients, there is contradictory evidence as to whether defibrillator capability improves prognosis in patients receiving CRT. RESET-CRT (Re-evaluation of Optimal Re-synchronization Therapy in Patients with Chronic Heart Failure) retrospectively compared outcomes in 847 CRT-P versus 2722 CRT-D patients undergoing CRT (of whom 27% had a non-ischaemic aetiology; exclusion criteria included recent ACS, revascularisation, or any indication for a secondary prevention ICD). The primary endpoint of all-cause mortality at 2.35 years of follow-up (adjusted for age and entropy balance) was non-inferior for CRT-P versus CRT-D (HR 0.99, 95% CI 0.81–1.20), suggesting no mortality benefit from defibrillator capability in this population. Aktas et al. compared propensity-matched outcomes of 535 patients with an ICD versus 535 patients without an ICD from the empagliflozin arm of the EMPEROR-Reduced trial. Those with an ICD versus no ICD had non-significantly lower mortality (HR 0.74, 95% CI 0.51–1.07, P = 0.114) and sudden cardiac death (HR 0.59, 95% CI 0.31–1.15, P = 0.122). However, despite propensity matching, the results were confounded by differences in medical therapy between groups, with more ICD patients receiving beta-blockers and ARNIs but fewer receiving ACE-I/ARBs and MRAs.
The VANISH (Ventricular Tachycardia Ablation versus Escalation of Antiarrhythmic Drugs) trial previously demonstrated the superiority of catheter ablation over escalated AAD therapy with regard to mortality, VT storm and appropriate ICD shocks in patients with previous MI and VT. A new sub-analysis compared shock-treated VT events and appropriate shock burden between the 2 groups. Catheter ablation was associated with a significant reduction in shock-treated VT events (39.07 vs. 64.60 per 100 person-years; HR 0.60; 95% CI 0.38–0.95) and total shock burden (48.35 vs. 78.23; HR 0.61; 95% CI 0.37–0.96). Prediction of the risk of sudden cardiac death (SCD) after MI has typically been guided by LVEF < 35%, but many patients with LVEF < 35% who receive an ICD never require it, whereas some with higher LVEF are still at risk of SCD. The additional predictive value of CMRI, in particular core scar size and grey zone size, for the PROFID risk prediction model was investigated in 2,049 patients imaged > 40 days post-MI. In the subgroup without ICD, use of CMRI data versus no CMRI data significantly improved prediction of SCD [area under curve (AUC) of model 0.753 vs. AUC 0.618]. In the subgroup with ICD, addition of CMRI data did not significantly improve prediction of SCD (AUC 0.598 vs. 0.535). This suggests CMRI may be useful to risk stratify post-MI and guide ICD use but further prospective studies are required. The SMART-MI-ICM trial previously reported that, in post-MI patients with EF 35–50%, implantable cardiac monitor (ICM) use versus control was associated with higher rates of arrhythmia detection although the clinical significance was unclear. The BIOGUARD-MI (BIO monitorinG in Patients With Preserved Left ventricUlar Function AfteR Diagnosed Myocardial Infarction) trial aimed to assess the clinical value of arrhythmia detection on ICM, by randomising 804 patients with NSTEMI/STEMI to ICM versus standard care. Use of ICM was not associated with an overall significant reduction in the primary composite endpoint of CV death or hospitalisation at 2.5 years (HR 0.84, P = 0.21, 95% CI 0.64–1.10), although a reduction was noted in the NSTEMI subgroup (HR 0.69, 95% CI 0.49–0.98). This subgroup observation can only be hypothesis-generating but is plausible given the more complex and co-morbid nature of a NSTEMI population.
While smartwatches may improve detection of atrial fibrillation (AF), including asymptomatic AF, previous studies have reported high false positive rates. The mAF-App II trial, which used Huawei smartwatch photoplethysmography, reported data from 2.8 million people in China who downloaded the app. During 4 years of follow-up, 12,244 (0.4%) people received a query AF notification, 5,227 attended for clinical evaluation with ECG and 24-h Holter monitoring and, within this group, AF was confirmed in 93.8%. This suggests much better specificity than previous studies, although the notification rate was lower than in some studies, reflecting the relatively young population, and clinical data were not available for the 7017 people who received a notification but did not attend for evaluation. Unlike previous Apple, Fitbit and Huawei studies, E-Brave used the Preventicus smartphone app and invited 67,488 policyholders of a German health insurance scheme to participate, of whom 5,551 met inclusion criteria and agreed to enroll (AF naïve, median age 65 years; 31% female; median CHA2DS2-VASc of 3) and were randomised to active AF screening (photoplethysmogram [PPG] for 1 min twice per day for 2 weeks then twice weekly for 6 months, plus 2-week loop recorder if abnormal PPG) versus standard care. At 6 months, those in the active arm had double the rate of AF detection requiring OAC treatment (1.33% vs. 0.63%; OR 2.12; 95% CI 1.19–3.76). After 6 months, those without a new AF diagnosis were invited to cross over to the opposite study arm, and, after a further 6 months, active screening with the app again doubled the detection and treatment of AF (1.38% vs. 0.51%; OR 2.75; 95% CI 1.42–5.34). Given the widespread availability of smartphones particularly in higher-risk populations, this may be a useful public health intervention, although further prospective studies are required to evaluate clinical outcomes of treating AF detected in this fashion. AF has been widely associated with increased risk of dementia and better control of AF may reduce this risk. Zeitler et al., using the Optum Clinformatics database, evaluated the propensity-matched risk of dementia in 19,088 patients following catheter ablation versus 19,088 patients treated with antiarrhythmic drugs (AAD) for AF. Catheter ablation was associated with a 41% reduction in risk of dementia (HR 0.59; 95% CI 0.51–0.68; P < 0.0001) and a 49% reduction in the secondary endpoint of mortality (HR 0.51, 95% CI 0.46–0.55, P < 0.001), supporting the value of effective AF treatment in this population. The Augustus trial previously reported the benefit of apixaban instead of vitamin-K antagonist (VKA) and ongoing P2Y12i monotherapy rather than DAPT for patients with AF and ACS/PCI. Harskamp et al. undertook a new analysis of 4,386 patients from Augustus to assess if benefits varied depending on baseline HAS-BLED (≤ 2 vs. ≥ 3) and CHA2DS2-VASc (≤ 2 vs. ≥ 3) scores. Apixaban was associated with lower bleeding versus VKA irrespective of baseline risk [HR 0.57 (HAS-BLED ≤ 2), HR 0.72 (HAS-BLED ≥ 3); interaction P = 0.23] and a lower risk of death or hospitalization [HR 0.92 (CHA2DS2-VASc ≤ 2); HR 0.82 (CHA2DS2-VASc ≥ 3); interaction P = 0.53]. Aspirin versus placebo increased bleeding irrespective of baseline risk [HR 1.86 (HAS-BLED ≤ 2); HR 1.81 (HAS-BLED ≥ 3); interaction P = 0.88] with no significant difference in death or hospitalization [HR 1.09 (CHA2DS2-VASc ≤ 2); HR 1.07 (CHA2DS2-VASc ≥ 3); interaction P = 0.90].
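As the Augustus sub-analysis stratifies patients by HAS-BLED and CHA2DS2-VASc, a brief sketch of how a CHA2DS2-VASc score is tallied from its standard components may be helpful; the function name, inputs and example patient below are illustrative assumptions and are not drawn from the trial dataset.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """Return the CHA2DS2-VASc score from its standard components.

    Weights: congestive heart failure 1, hypertension 1, age >= 75 scores 2
    (age 65-74 scores 1), diabetes 1, prior stroke/TIA/thromboembolism 2,
    vascular disease 1, female sex 1.
    """
    score = 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score


# Example: a 72-year-old woman with hypertension and diabetes scores 4,
# which would fall in the ">= 3" stratum used in the sub-analysis above.
print(cha2ds2_vasc(age=72, female=True, chf=False, hypertension=True,
                   diabetes=True, stroke_or_tia=False,
                   vascular_disease=False))
```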
The INVICTUS (Investigation of Rheumatic AF Treatment Using Vitamin K Antagonists, Rivaroxaban or Aspirin Studies) trial randomised 4565 patients with rheumatic mitral valve disease at high risk (CHA2DS2-VASc ≥ 2, mitral valve area ≤ 2 cm², left atrial spontaneous contrast or thrombus) to Rivaroxaban versus VKA. Rivaroxaban was associated with an increased incidence of the primary composite endpoint of stroke, systemic embolus, MI, or death from vascular/unknown cause (560 vs. 446 events; HR 1.25, 95% CI 1.10–1.41) despite suboptimal VKA control (only 33.2% having an appropriate INR at enrolment, and the time in therapeutic range (TTR) being only 56–65% during follow-up). Rivaroxaban was also associated with a 37% increased risk of stroke and a 23% increased risk of death. Thus, for AF and rheumatic mitral valve disease, VKA remains preferable to rivaroxaban. Previous studies reported that high-power, short-duration (HPSD) versus conventional radiofrequency ablation (RFA) for AF was more effective with similar safety. The POWER FAST III (High Radiofrequency Power for Faster and Safer Pulmonary Vein Ablation) trial randomised 267 patients with AF to HPSD versus conventional RFA. HPSD was associated with a reduced ablation time but no difference in the primary efficacy outcome of freedom from atrial arrhythmia (99.2% vs. 98.4% in right pulmonary veins, 100% vs. 100% in left pulmonary veins) or the primary safety outcome of oesophageal lesions at endoscopy (7.5% vs. 6.5%; P = 0.94). Both conventional RFA and cryoablation for pulmonary vein isolation induce injury to neurocardiac structures (nerves and ganglia), which may be detected by a release of S100b and a post-procedural rise in heart rate. The technique of pulsed field ablation (PFA) may reduce neurocardiac trauma. Lemoine et al. randomised 56 patients to PFA versus cryoablation for AF. In those treated with PFA versus cryoablation, troponin I levels were 3 times higher (P < 0.01), indicating more myocardial injury, but S100b levels were 2.9 times lower (P < 0.001), and there was no increase in post-procedural heart rate (vs. a marked increase with cryoablation; P < 0.01), indicating less neurocardiac damage with PFA. In addition, procedural success and durability of PFA appear encouraging. Keffer et al. evaluated 41 patients undergoing pulmonary vein PFA. The primary outcome of AF > 30 s or atrial tachycardia after a 30-day blanking period detected on 7-day Holter monitoring at 3 and 6 months occurred in 5 patients, of whom 3 underwent redo ablation during which all pulmonary veins were found to be still isolated. EAST-AFNET 4 previously reported a benefit of early rhythm control versus standard care in patients with AF, but there has been a paucity of data regarding initial ablation in such patients. In PROGRESSIVE-AF (a 3-year follow-up of the EARLY-AF trial), 303 patients with newly diagnosed symptomatic paroxysmal AF were randomised to upfront ablation versus AAD. Ablation was associated with a 75% reduction in the primary outcome of progression to persistent AF/flutter/tachycardia requiring cardioversion (1.9% vs. 7.4%; HR 0.25; 95% CI 0.09–0.70), a 49% reduction in any atrial arrhythmia > 30 s (56.5% vs. 77.2%; HR 0.51; 95% CI 0.38–0.67), a 69% reduction in hospitalisations (5.2% vs. 16.8%; RR 0.31; 95% CI 0.14–0.66) and a 53% reduction in adverse effects (11% vs. 23.5%; RR 0.47; 95% CI 0.28–0.79).
Use of botulinum toxin A to reduce AF was assessed in the NOVA (NeurOtoxin for the PreVention of Post-Operative Atrial Fibrillation) study which randomised 323 patients undergoing cardiac (bypass and/or valve) surgery to epicardial botulinum toxin A (125 units or 250 units) versus placebo . Overall, botulinum 125 units or 250 units versus placebo was not associated with a reduction in the primary outcome of AF > 30 s at 30 days (RR 0.80; 95% CI 0.58–1.10 and RR 1.04; 95% CI 0.79–1.37), respectively, although in the patient subgroup > 65 years, botulinum 125 units was associated with AF reduction (RR 0.64; 95% CI 0.43–0.94) which may be considered hypothesis-generating and warrant further study. Etripamil is a novel non-dihydropyridine calcium channel blocker, which may be given as a nasal spray, for acute treatment of patients with paroxysmal supraventricular tachycardia (PSVT) or AF. The RAPID (Efficacy and Safety of Etripamil for the Termination of Spontaneous PSVT) study screened 706 patients with PSVT ultimately assigning in random fashion 135 patients to etripamil versus 120 to placebo. Etripamil was associated with more than double the primary outcome of conversion to sinus rhythm within 30 min (64.3% vs. 31.2%; HR 2.62; 95% CI 1.66–4.15) and a median time to conversion of 17 min (almost 3 times quicker than placebo).
Previous studies have shown the selective cardiac myosin activator Omecamtiv Mecarbil may improve CV outcomes in HFrEF patients. To assess functional impact, the METEORIC-HF (Effect of Omecamtiv Mecarbil on Exercise Capacity in Chronic Heart Failure With Reduced Ejection Fraction) trial randomised 276 patients with LVEF ≤ 35% and NYHA II-III (in 2:1 fashion) to Omecamtiv Mecarbil versus placebo for 20 weeks, in addition to standard therapy. Surprisingly, despite good tolerability and the previous favourable CV outcome data, Omecamtiv Mecarbil was not found to improve exercise capacity (assessed by peak oxygen uptake on cardiopulmonary exercise stress testing). A major stumbling block in optimising HF medications can be hyperkalaemia. Patiromer, a non-absorbed sodium-free potassium-binding polymer, increases faecal potassium excretion. The DIAMOND (Patiromer for the Management of Hyperkalemia in Subjects Receiving RAASi for HFrEF) trial randomised 1642 patients with HFrEF and renin–angiotensin–aldosterone system inhibitor (RAASi)-related hyperkalaemia to Patiromer versus placebo. Over a period of 13–42 (mean 27) weeks, Patiromer was associated with less increase in potassium (adjusted mean change + 0.03 vs. + 0.13 mmol/l; 95% CI –0.13 to 0.07; P < 0.001). The risk of hyperkalaemia and the need for reduction of MRA dose were numerically (although not statistically) lower. These important findings support Patiromer being incorporated in local HF protocols. Implementation of HF guidelines can be hampered by many factors. PROMPT-HF (PRagmatic trial of Messaging to Providers about Treatment of Heart Failure) randomised 1310 patients with HFrEF, not already taking all four pillars of therapy, to a strategy of targeted, tailored electronic healthcare record alerts to optimise guideline-directed medical therapy (GDMT) versus standard care. The electronic alert strategy was associated with a significant increase in the number of drug classes prescribed at 30 days (26% vs. 19%; adjusted RR 1.41; 95% CI 1.03–1.93; P = 0.03; number needed to alert = 14). In an impressive attempt to improve secondary prevention therapy delivery, the SECURE (Secondary Prevention of Cardiovascular Disease in the Elderly Trial) trial randomised 2499 patients with MI ≤ 6 months to an open-label polypill, comprising aspirin 100 mg, ramipril (2.5, 5 or 10 mg) and atorvastatin (20 or 40 mg), versus standard care. At 3-year follow-up, use of the polypill was associated with a 24% reduction in the primary endpoint of CV death, type 1 MI or ischaemic stroke (9.5% vs. 12.7%; HR 0.76, 95% CI 0.6–0.96; P = 0.02). Sodium-glucose cotransporter-2 inhibitor (SGLT2i) trials continue to dominate HF research. A meta-analysis of 13 SGLT2i trials involving 90,413 participants reported a 37% reduction in the risk of progressive renal dysfunction (RR 0.63, 95% CI 0.58–0.69) and a 23% reduction in the risk of CV death or HF hospitalisation (RR 0.77, 95% CI 0.74–0.81). Effects were similar in diabetics versus non-diabetics and regardless of baseline renal function (Fig. ). When first introduced and before reno-protective properties became clear, SGLT2i use was restricted to patients with eGFR > 60 to optimise glycaemic control. EMPA-KIDNEY (Study of Heart and Kidney Protection With Empagliflozin) randomised 6609 patients with impaired renal function (eGFR 20 to < 45, or eGFR 45 to < 90 plus urinary albumin-to-creatinine ratio > 200) to empagliflozin versus placebo.
At 2 years, empagliflozin was associated with a 28% reduction in the primary endpoint of progression of kidney disease (defined as end-stage kidney disease, eGFR < 10, decrease in eGFR ≥ 40% from baseline, or death from renal causes) or CV death (13.1% vs. 16.9%; HR 0.72; 95% CI 0.64–0.82; P < 0.001). The EMPULSE (Empagliflozin in Patients Hospitalized for Acute Heart Failure) trial randomised 530 acutely decompensated patients hospitalised with HF, regardless of ejection fraction or diabetic status, to empagliflozin versus placebo. Those with IV vasodilators, IV inotropes, requiring increasing IV diuretic doses, cardiogenic shock or recent ACS were excluded. Empagliflozin versus placebo was associated with greater clinical benefit in the primary composite endpoint of death, number of HF events, time to first HF event, and change in Kansas City Cardiomyopathy Questionnaire-Total Symptom Score at 90 days (stratified win ratio 1.36; 95% CI 1.09–1.68; P = 0.0054) (Fig. ). The DELIVER (Dapagliflozin in Heart Failure with Mildly Reduced or Preserved Ejection Fraction) study randomised 6263 patients with HF and LVEF > 40% (including hospitalised or recently hospitalised patients) to dapagliflozin versus placebo. Dapagliflozin was associated with an 18% reduction in the primary endpoint of CV death or worsening HF (16.4% vs. 19.5%; HR 0.82, 95% CI 0.73–0.92; P < 0.001). Acetazolamide, a carbonic anhydrase inhibitor, may improve the efficiency of loop diuretics through reduction of proximal tubular sodium reabsorption, potentially leading to faster decongestion in patients with acute decompensated heart failure. The ADVOR (Acetazolamide in Decompensated Heart Failure with Volume Overload) study randomised 519 patients with decompensated HF to IV acetazolamide (500 mg daily) versus placebo, in addition to IV loop diuretics (at twice the oral maintenance dose). Acetazolamide was associated with a 46% greater likelihood of attaining the primary endpoint of absence of signs of fluid overload at 3 days (42.2% vs. 30.5%; RR 1.46, 95% CI 1.17–1.82; P < 0.001), with higher urine output and natriuresis but without an excess of acute kidney injury, hypokalaemia, or hypotension. While the importance of optimised dosing of HF treatment is well established, HF therapies may be associated with hypotension and renal decline, and the ideal rate of up-titration is less clear. The STRONG-HF (Safety, Tolerability and Efficacy of Rapid Optimization, Helped by NT-proBNP Testing, of Heart Failure Therapies) trial randomised 1078 patients admitted to hospital with acute HF to rapid up-titration (achieving full recommended doses within 2 weeks of discharge) versus usual care. Rapid up-titration was associated with a significantly lower rate of readmission for HF or all-cause death (15.2% vs. 23.3%; 95% CI 2.9–13.2; P = 0.0021), approximately a 10% increase in adverse events, but a similar rate of serious adverse events. IV iron has a Class IIa recommendation for patients with HF and anaemia. Most trials have used ferric carboxymaltose. IRONMAN (Intravenous ferric derisomaltose in patients with heart failure and iron deficiency in the UK) randomised 1,137 patients with chronic HF and iron deficiency (LVEF < 45%, with transferrin saturation < 20% or ferritin < 100 µg/l) to ferric derisomaltose (which can be given as a rapid, high-dose infusion) versus usual care.
At a median follow-up of 2.7 years, ferric derisomaltose showed a trend to reduction in the primary composite endpoint of HF hospitalisation and CV death (336 vs. 411 events; RR 0.82, 95% CI 0.66–1.02; P = 0.07) and a significant reduction in HF hospitalisations. Since study outcomes may have been confounded by the COVID-19 pandemic, a pre-specified analysis censoring follow-up on September 30, 2020 was undertaken, which reported a significant reduction in the primary endpoint (210 vs. 280 events; RR 0.76, 95% CI 0.58–1.00; P = 0.047). Myosin inhibition using mavacamten in patients with obstructive hypertrophic cardiomyopathy was examined in the VALOR-HCM (Mavacamten in Adults With Symptomatic Obstructive HCM Who Are Eligible for Septal Reduction Therapy) trial, which randomised 112 patients eligible for septal reduction therapy (SRT) to mavacamten (starting at 5 mg and titrating using LVEF and LVOT gradient) versus placebo. After 16 weeks of follow-up, mavacamten was associated with a marked reduction in obstructive parameters, with only 17.9% still meeting guideline criteria for SRT (vs. 76.8% of placebo patients; 95% CI 0.44–0.74; P < 0.001).
Lipoprotein(a) [Lp(a)] is highly genetically determined and higher levels are associated with an increased risk of CV disease. Statins have minimal effect and PCSK9i only a modest effect, but Olpasiran, a small interfering RNA (siRNA), may enable significant Lp(a) reduction. In the OCEAN(a)-DOSE TIMI 67 trial, 281 patients with elevated Lp(a) > 150 nmol/L were randomised to 1 of 4 olpasiran doses (10 mg, 75 mg, or 225 mg every 12 weeks, or 225 mg every 24 weeks) versus placebo. By 36 weeks, the 4 doses of olpasiran were associated with placebo-adjusted percent reductions in Lp(a) concentration of 70.5%, 97.4%, 101.1%, and 100.5%, respectively, along with useful reductions in low-density lipoprotein (LDL) cholesterol and apolipoprotein B. In addition to Olpasiran, other Lp(a)-lowering agents are in development, including the siRNA SLN360 and pelacarsen, an antisense oligonucleotide targeting apolipoprotein(a) mRNA that is being studied in the 8000-patient outcomes study Lp(a)HORIZON, which will hopefully clarify if reduction of Lp(a) is of benefit. Perceived myalgia remains an important limitation for statin adherence. The Cholesterol Treatment Trialists' Collaboration evaluated the incidence of myalgia in a meta-analysis of 19 double-blind trials of statin versus placebo (n = 123,940) and four double-blind trials of more versus less intensive statin regimens (n = 30,724). For the 19 placebo-controlled trials, statin use was associated with a 3% increase in reported muscle pain or weakness at a median 4.3 years follow-up (27.1% vs. 26.6%; RR 1.03, 95% CI 1.01–1.06), but the excess was mainly during the first year, when statin use was associated with an absolute excess of 11 events per 1000 person-years. Similarly, a small increase in reported muscle pain or weakness was seen with higher versus lower intensity statin regimens (36.1% vs. 34.8%; RR 1.05, 95% CI 1.01–1.09). In summary, while statin therapy can cause myalgia, most (> 90%) reports of muscle symptoms by participants allocated statin therapy were not due to the statin. The FOURIER-OLE (Fourier Open-label Extension Study in Subjects With Clinically Evident Cardiovascular Disease in Selected European Countries) evaluated the long-term follow-up of the FOURIER study in 6635 patients randomised to the PCSK9 inhibitor Evolocumab versus placebo. At a median of 5 years, Evolocumab was associated with a 20% reduction in CV death, MI or stroke (HR 0.8, 95% CI 0.68–0.93; P = 0.003) with a low risk of adverse events. Elevated uric acid is recognised as an independent risk factor for CV events. The ALL-HEART (Allopurinol versus usual care in UK patients with ischaemic heart disease) study randomised 5721 patients > 60 years with ischaemic heart disease but no history of gout to allopurinol (up-titrated to a maximum of 600 mg) versus placebo. However, over a mean of 4.8 years of follow-up, allopurinol was not associated with a reduction in the primary endpoint of CV death, MI or stroke (11% vs. 11.3%; P = 0.65). The endothelin pathway has been implicated in the pathogenesis of hypertension but is currently not targeted therapeutically, leaving this pathway unopposed with currently available drugs. The global PRECISION (Dual endothelin antagonist aprocitentan for resistant hypertension) trial randomised 730 patients with hypertension resistant to at least 3 antihypertensives to the dual endothelin receptor antagonist aprocitentan 12.5 mg or 25 mg versus placebo in a 1:1:1 fashion.
At 4 weeks, aprocitentan met the primary endpoint with greater systolic blood pressure reduction (mean change of −15.3 mmHg for aprocitentan 12.5 mg and −15.2 mmHg for aprocitentan 25 mg vs. −11.5 mmHg for placebo; P < 0.005 for both treatment doses). Delivering healthcare in rural environments can be challenging. In China, non-physician village doctors may initiate and titrate antihypertensive medications according to a standard protocol with supervision from primary care physicians, and undertake health coaching on home blood pressure monitoring, lifestyle changes, and medication adherence. The China Rural Hypertension Control Project randomised 33,995 patients from 326 villages to a village doctor-led multifaceted intervention versus usual care. By 36 months, mean systolic pressure had fallen from 157 to 126.1 mmHg in the intervention group, whereas the usual-care group only dropped from 155.4 mmHg to 146.7 mmHg, and the intervention was associated with a significant reduction in the primary composite CV endpoint (1.98% vs. 2.85% per year; HR 0.69, 95% CI 0.63–0.76), with 33% fewer strokes (P < 0.0001), 39% fewer cases of HF (P = 0.005), 24% fewer CV deaths (P = 0.0004), and 15% fewer all-cause deaths (P = 0.009). Previous trial data suggested a protective effect for nocturnal dosing of anti-hypertensive therapies on cardiovascular events, although the trial methodology was subsequently questioned. The TIME (Treatment in Morning versus Evening) trial randomised 21,104 patients (mean age 65 years, female 43%) to evening versus morning dosing of their regular antihypertensive agent. After 5 years, the primary outcome (composite of vascular death, MI or stroke) occurred in 3.4% of the evening dosing group versus 3.7% of the morning group (P = 0.53). There was no difference in rates of stroke between groups (1.2% vs. 1.3%, P = 0.54); however, there was a modestly higher rate of falls in the morning dosing group (22.2% vs. 21.1%, P = 0.048). This informative trial demonstrates no difference in cardiovascular outcomes with respect to the timing of anti-hypertensive dosing, albeit with a slightly reduced risk of falls with evening dosing.
While all summarised trials have been presented at major cardiology conferences in 2022, not all trials have been published as yet in peer-reviewed journals.
This paper has highlighted and summarised the key cardiology trials that were published and presented during 2022. Many will guide clinical practice and influence guideline development. Others have shown encouraging early data which will guide future study.
|
Appendiceal Well-Differentiated Neuroendocrine Tumors: A
Single-Center Experience and New Insights into the Effective Use of
Immunohistochemistry
|
73fa4540-de52-4fad-973a-f9fef368141d
|
10101181
|
Anatomy[mh]
|
Appendectomy is among the most frequently performed surgical procedures and typically follows an emergency room visit with acute abdominal pain. An
appendectomy may also be performed on patients without acute appendicitis-related
symptoms, in conjunction with gynecological surgery procedures and right
hemicolectomies for cecal or right colon lesions. The incidence of appendiceal
tumors in an appendectomy specimen is low, and most of these tumors are clinically undetected. As a result, many patients diagnosed with incidental appendiceal tumors lack
clinical staging information such as size and lymph node/distant metastasis at
initial diagnosis, and the treatment decision weighs heavily on the initial
pathology report. Appendiceal well-differentiated neuroendocrine tumor accounts for more than half of
appendiceal tumors and is the most common histological type. Most are incidental, small, and negative for hormonal symptoms and have an
excellent 5-year and long-term prognosis. Major international guidelines have listed several risk factors for lymph
node metastasis. European Neuroendocrine Tumour Society (ENETS) and the North
American Neuroendocrine Tumour Society (NANETS) guidelines state tumor size as the
most crucial factor, with tumors > 2 cm at increased risk of lymph node
metastasis and should be followed by subsequent right hemicolectomy. , Other risk factors include the
extent of the tumor, pathological tumor stage, presence of lymphovascular invasion,
and tumor grade. Given the generally indolent nature and the younger demographic of appendiceal
well-differentiated neuroendocrine tumors compared to other common sites of
well-differentiated gastroenteropancreatic neuroendocrine tumors (pancreas,
rectum, small intestine, stomach), aggressive treatment options should be limited to select patients at
high risk. While many case reports and single-center pathology reviews have contributed to
recognizing this incidental lesion, including the most recent
pathology review by Noor et al, the practical immunohistochemical panel based on risk assessment has not been
thoroughly examined to date. Here, we reviewed clinicopathological information among 70 consecutive appendiceal
well-differentiated neuroendocrine tumors in a single tertiary institution and aimed
to provide practical information on the utility of immunohistochemistry.
An institutional review board (University Health Network, REB 21-5299) approved this
study. Two pathologists performed a retrospective review of pathology and clinical
charts. Pathology data were retrieved from CoPath, a pathology information system,
and clinical data from EPR, an electronic medical records system. All patients
diagnosed with appendiceal well-differentiated neuroendocrine tumors from 2005 to
2019 at University Health Network were included. Patients were either treated with
appendectomy for clinically diagnosed appendicitis or as appendectomy as part of
surgery for other reasons. All patients were anonymized. Poorly-differentiated
neuroendocrine tumors and neuroendocrine carcinoma were not included in our series.
Patients with concurrent appendiceal carcinoma and goblet cell adenocarcinoma were
also excluded from our study. Patient sex, age, clinical presentation, diagnostic
imaging, operative details, surgical pathology, and clinical outcomes were
reviewed. Statistical Analysis Statistical analysis was performed using GraphPad Prism 7.00. Briefly,
comparisons of categorical data were performed using Fisher's exact test in
2 × 2 contingency tables. The metric variables such as age (years) and tumor
size (cm) were described using median and range, and between-group comparisons
were performed using the nonparametric Mann–Whitney U test. All statistical
tests were two-sided; p values below 0.05 were considered significant. * P <
0.05.
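The group comparisons described above lend themselves to a short worked illustration. The snippet below reruns the same two tests (Fisher's exact test on a 2 × 2 contingency table and the Mann–Whitney U test on a metric variable) in Python with SciPy rather than GraphPad Prism; the counts and tumor sizes are placeholder values chosen for illustration and are not data from this study.

```python
# Minimal sketch of the two statistical comparisons described above,
# using SciPy in place of GraphPad Prism; all values are placeholders.
from scipy.stats import fisher_exact, mannwhitneyu

# Fisher's exact test on a 2 x 2 contingency table
# (e.g. marker positive/negative by high-/low-risk group).
table = [[12, 4],   # high-risk group: positive, negative
         [5, 11]]   # low-risk group:  positive, negative
odds_ratio, p_categorical = fisher_exact(table, alternative="two-sided")

# Mann-Whitney U test for a metric variable (e.g. tumor size in cm)
# compared between the two groups.
high_risk_sizes = [1.2, 0.9, 2.3, 1.6, 0.8]
low_risk_sizes = [0.3, 0.5, 0.4, 0.7, 0.6]
u_stat, p_metric = mannwhitneyu(high_risk_sizes, low_risk_sizes,
                                alternative="two-sided")

# Two-sided p values below 0.05 would be considered significant,
# matching the threshold used in the study.
print(f"Fisher exact p = {p_categorical:.4f}, Mann-Whitney p = {p_metric:.4f}")
```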
The number of appendectomies performed at our institution from 2005 to 2019 was 4110,
and the proportion of appendix containing appendiceal well-differentiated
neuroendocrine tumors was 1.7%. Clinicopathological features of 70 appendiceal
well-differentiated neuroendocrine tumors are summarized in . More than half of the patients
were female (60%). Patients’ median age was 36.5 (range 5−87) years; seven were
children aged 5, 14, 15, 15, 17, 17, 17 years. Initially, 63 patients were treated
with appendectomy, and seven patients were treated with upfront right hemicolectomy
for the following reasons; ascending colon cancer in 4 patients, cecal cancer in 1
patient, Crohn’s disease in 1 patient, and recurrent diverticulitis in 1 patient.
Forty-seven patients had clinically suspected appendicitis; for eight patients, the
reason for the appendectomy was not documented and for 15 patients was other reasons
including ovarian cyst/tumor (5), diverticulitis (1), a-colon ca (4), cecal ca (1),
uterine carcinosarcoma (1), endometrial ca (1), kidney/pancreas transplant (1), and
Crohn’s disease (1). Among the 63 simple appendectomies, five patients underwent
subsequent right hemicolectomy following a well-differentiated neuroendocrine tumor
diagnosis. Diagnostic imaging was performed in 47 patients (67%), using abdominal CT in 31/70
patients (44%) and abdominal ultrasound in 16/70 patients (23%). CT showed a
thickened appendiceal wall in 15 patients; thickened appendiceal wall and
peri-appendiceal inflammation in 3; distended appendix in 3 patients;
non-significant appendiceal findings in ten patients. The ten patients with
non-significant appendiceal findings received appendectomy for other reasons
described above. Ultrasound showed thickened appendiceal wall in 15 patients;
distended appendix in one patient. The median size of the tumor was 0.5 cm (range 0.05−2.9 cm), 49 tumors measured <
1.0 cm, 14 tumors measured 1.0–2.0 cm, and four tumors > 2.0 cm. Three tumors had
no information on size. A macroscopic lesion at grossing was identified in 23
patients (33%). The macroscopic lesion was described as lumen occlusion (8, 35%),
nodule (4, 17%), thickening (2, 9%), tumor (1, 4%), and not specified (7, 30%). One
patient had concurrent low-grade appendiceal mucinous neoplasm (LAMN), which
presented as mucin material at grossing. The appendix was submitted in total for
pathological analysis in 60 patients (86%) following the discovery of
well-differentiated neuroendocrine tumors. Description of microscopic growth pattern was documented in only 18 tumors (26%);
which were described as acini (1), cords (2), glands/tubules (4), gyriform (1),
microacini (1), nest (11), ribbon (1), sheet (2), single file (2), trabecular (4)
with some tumors showing multiple growth patterns ( ). No tumor was described as a
tubular neuroendocrine tumor. Pathological tumor extent was submucosa; SM (11 tumors, 17%), muscularis propria; MP
(22 tumors, 34%), subserosa; SS/ Mesoappendix; Meso (27 tumors, 42%), visceral
peritoneum; VP (5 tumors, 8%) and no information available (5 tumors, 8%).
Lymphovascular invasion was observed in 10 tumors (14%) and perineural invasion in 7
tumors (10%). The resection margin was positive in 2 tumors (both radial margins);
tumor rupture site was observed in 3 tumors. Based on mitotic counts and MIB-1
index, most tumors were classified as WHO Grade 1 (63, 93%) and the remaining seven
tumors as WHO Grade 2. No tumor was classified as WHO Grade 3. Inflammatory change within the appendix was seen in the form of acute appendicitis
(33 patients), subacute appendicitis (3 patients), and serositis/peritonitis (13
patients). Concurrent lesions within the appendix include villous adenoma with
low-grade dysplasia (1), sessile serrated adenoma (1), Low-grade appendiceal
mucinous neoplasms (LAMN) (1), and diverticulum (1). The median number of immunohistochemical staining performed in one tumor was six
antibodies (range 0−15). Representative pictures of immunohistochemical markers are
shown in . Among
the antibodies, chromogranin A, synaptophysin, CAM5.2, and CDX2 were positive in
> 90% of tumors. At least one chromogranin A or synaptophysin was performed in 60
tumors (86%). Both markers were selected in 43 tumors, and when only one of the two
was stained (17 tumors, 24%), most pathologists chose chromogranin A (16 tumors).
CAM5.2 was stained in 36 tumors (51%), and all were diffusely positive (36/36,
100%). CDX2 was positive in all but one tumor stained for this marker (32/33 tumors,
97%). Serotonin was positive in 34/47 tumors (79%); glucagon in 9/25 tumors (36%);
PP in 8/30 tumors (27%); PYY in 17/32 tumors (53%). Based on immunohistochemistry,
18 tumors were from EC-cell origin, 12 tumors from L-cell origin, 6 tumors from both
EC and L-cell origin. 10 more tumors contained EC-cells but could not be fully
classified due to the incomplete immunohistochemical panel. MIB-1 staining was
performed in 63 out of 70 tumors (90%): < 3% in 58 tumors and 3–20% in 5 tumors. No tumor
had a MIB-1 index over 20%. Based on prior studies, , tumor size over 2 cm and tumor stage pT3/T4 were considered patients at high risk
for nodal metastasis, and our 70 patients were divided into high risk and low risk
based on these two criteria. Patients that did not have information on tumor size or
pathological stage were excluded at this point. As a result, 32 patients were
considered high risk; 32 were considered low-risk. Clinicopathological
characteristics based on two-tier risk classification are summarized in . Briefly, there
was no significant difference in sex distribution and age between the low and
high-risk groups. Differences in the mitotic count and MIB-1 index were also
statistically insignificant. No patient in the low-risk group received subsequent
right hemicolectomy, whereas five out of 29 high-risk patients treated with appendectomy
initially went on to receive subsequent hemicolectomy (17%). No patient in the
low-risk group had lymphovascular/perineural invasion. Among hormonal
immunohistochemical markers, serotonin was more frequently positive in the high-risk
group (p = 0.0212); PYY positive in the low-risk group (p = 0.0060). No significant
difference was seen in the expressions of glucagon and PP. Imaging follow-up (CT: 31, ultrasound: 3, PET 1 patient) was available in 35 patients
with a median of 2 years 3 months (range 2 months – 10 years 7 months). All patients
were followed up for a median of 4 years 8 months (1 month – 16 years 2 months). One
patient died of uterine carcinosarcoma; all other 69 patients are alive without
recurrence.
This retrospective study identified 70 patients of incidental appendiceal
well-differentiated neuroendocrine tumors over 15 years. Sixty percent of our
patients were female, in keeping with past reports of slight female
predominance. Our series included seven children (10%). The mean age was 40
years, consistent with younger age distribution compared to other neuroendocrine
tumors of other sites. All 70 tumors were incidental, and the primary reason for the appendectomy
was clinically diagnosed appendicitis. No patient was diagnosed as appendiceal tumor
preoperatively. No patient exhibited symptoms other than acute
appendicitis-associated symptoms. Hormone-related symptoms are extremely rare in
non-metastatic appendiceal well-differentiated neuroendocrine tumors, and no patient in our series presented with them. Appendiceal well-differentiated neuroendocrine tumors arise from neuroendocrine cells
located within the lamina propria/submucosa of the appendiceal wall, and the tip of
the appendix is where most of these cells are located. , In our series, the tumor's
location was predominantly the tip or distal appendix (86%). The median size was 5 mm, and 67% of the tumors were smaller than 1.0 cm. Among four
tumors with a maximum dimension of > 2 cm, one received appendectomy with
hysterectomy for endometrial carcinoma. Despite the large tumor size, no further
treatment was considered due to comorbidity and age (> 80 years). The remaining
three tumors received subsequent right hemicolectomy following the diagnosis of
appendiceal well-differentiated neuroendocrine tumor, and none presented with
residual tumor or lymph node metastasis. Only one other tumor < 2.0 cm received
subsequent right hemicolectomy. This tumor was a G1 well-differentiated
neuroendocrine tumor in a 22-year-old male, with tumor size 0.8 cm, extending into
the mesoappendix, negative for lymphovascular invasion. The reason for subsequent
colon resection is unknown; however, we speculate that the younger age and extent of
the tumor (mesoappendix) may have prompted the procedure at the time of diagnosis
(2005). Regardless of the status of subsequent colectomy, no patient in our series
presented with recurrence/metastasis or died of disease. Tumors > 2 cm are rare, comprising 6% of our series. Tumors > 2 cm have
an increased risk of metastasis, and subsequent colectomy is recommended. , Conversely, tumors < 1 cm
have a low risk of lymph node metastasis, and the demerit of surgical complications
weighs heavier than benefit. In the most recent paper by Noor et al, the study recommended a renewed nomenclature for the smallest tumors (size
< 5 mm) that did not invade subserosa/mesoappendix and suggested no synoptic is
needed for those tumors. Our study included 30 tumors of size < 5 mm, and among
them, only two tumors extended into the subserosa/mesoappendix. Only one tumor was
G2. None received subsequent hemicolectomy, and none presented with recurrence nor
died of disease. This result is similar to their study. Management of tumors of size
1-2 cm is still of debate, and Rault-Petite et al, suggested that lymphovascular
invasion and the extent of the tumor may be potential factors that prompt resection
in this group. In our study, none of the 1-2 cm sized tumors presented with LVI; therefore,
LVI was not a significant factor in determining the prognosis of this group. Based
on our findings, the simple two-tier risk classification using size (< 2 cm) and
extent of the tumor (absence of subserosa/mesoappendix extension) was
sufficient. The practical immunohistochemical panel for appendiceal well-differentiated
neuroendocrine tumors has not been established. The median of six immunostains
performed for pathological diagnosis in our series is quite large for a primarily
indolent tumor, and some stains may be omitted without negatively impacting the
patient. Immunohistochemistry for neuroendocrine tumors can be divided into five groups. The first group is markers for neuroendocrine differentiation (chromogranin A
and Synaptophysin), the antibody group most frequently ordered by the signing
pathologist. All appendiceal well-differentiated neuroendocrine tumors stained
positive for at least one of the two neuroendocrine markers in our series. While
demonstrating neuroendocrine differentiation using immunohistochemistry for routine
diagnosis of neuroendocrine tumors is considered standard and practical, some experts argue that it is not mandatory for morphologically typical
neuroendocrine tumors with an indolent behavior. Although we agree with this point, immunohistochemistry for chromogranin A
and Synaptophysin is undoubtedly reasonable and, in most cases, the minimum
requirement considering the rarity of this tumor. The second group, CAM5.2, is used to demonstrate the epithelial nature of the tumor
and is positive in almost all appendiceal well-differentiated neuroendocrine tumors, including our series where all tumors stained positive (36/36, 100%). While
the positive results are reassuring, routine staining of CAM5.2 is also redundant
for diagnosing typical appendiceal well-differentiated neuroendocrine tumors and is
usually only necessary when there is unusual morphology, including ganglion-like
cells indicating the possibility of a paraganglioma. The third group is markers for tumor proliferation, namely Ki-67/MIB-1. 92% of our
series had MIB-1 index < 3%. The remaining tumors had MIB-1 index 3-20%, and none
were > 20%. Most appendiceal well-differentiated neuroendocrine tumors seem to be
MIB-1 index < 3% or G1, and immunohistochemistry for MIB-1 may not even be
necessary for small tumors, although certainly reasonable as some tumors may show
unexpected higher MIB-1 index despite low mitotic counts. The fourth group is markers for the site of origin. CDX2 is useful in determining
midgut origin and was positive in all but one tumor stained for this marker (32/33 tumors)
in our series. While this marker is helpful in patients presenting with metastatic
disease to determine the site of origin, routine CDX2 is unnecessary in most tumors
as negative CDX2 does not provide additional information about the tumor. The fifth group is hormonal markers. Several hormonal markers were stained in our
study, including serotonin, glucagon, somatostatin, PP, and PYY. While staining for
hormonal markers does not provide additional clinical information for most
non-functioning appendiceal well-differentiated neuroendocrine tumors, our retrospective study found novel findings related to some of the markers.
In our study, when the patients were divided into two groups based on the risk of
lymph node metastasis, serotonin was more positive in the high-risk group
(p = 0.0212), while PYY was more positive in the low-risk group (p = 0.0060).
Therefore, immunohistochemical staining for serotonin may be applied in the
high-risk appendiceal well-differentiated neuroendocrine tumors; nevertheless, it
may not be clinically relevant in the absence of hormonal symptoms. Also, hormonal
markers may be utilized to distinguish EC-cell and L-cell neuroendocrine tumors.
Based on available immunohistochemical results, 12 cases were classified as L-cell
neuroendocrine tumors (serotonin-negative; glucagon or PP or PYY-positive).
According to our study's limited morphological data, all four tumors with a
characteristic trabecular growth pattern were classified as L-cell tumors using
immunohistochemistry. A detailed morphological examination may aid in the
identification of the less common L-cell tumors; nevertheless, hormonal markers are
not mandatory in appendiceal well-differentiated neuroendocrine tumors, as it lacks
significance in prognostic value. Our study has its limitations; most of all, the retrospective study design. This
design has resulted in the variation of immunohistochemical panels between patients,
as the immunohistochemistry has been ordered at the discretion of different
pathologists at different time periods under different guidelines and available
antibodies. Also, this study was a single-tertiary center study, which may not truly
reflect the incidence of appendiceal well-differentiated neuroendocrine tumors as
many appendectomies are performed in community hospitals. Also, because of its
indolent and incidental presentation, the patient's age at diagnosis may not
represent the actual occurrence of this lesion. In conclusion, appendiceal well-differentiated neuroendocrine tumors are incidental
findings in an appendectomy specimen with an excellent prognosis. Typical
appendiceal well-differentiated neuroendocrine tumors occur in younger (< 40
years) females, are small (< 1 cm), G1, with no lymph node and distant
metastasis. Our study shows evidence that the comprehensive immunohistochemical
panel used in other neuroendocrine tumors may be excessive for the clinically and
pathologically typical appendiceal well-differentiated neuroendocrine tumors.
Immunohistochemistry should be chosen carefully at the pathologist's discretion, and
morphological examination of tumor size and extent remains the most critical
factor.
|
Evidence-based consensus guidelines for the management of catatonia:
Recommendations from the British Association for Psychopharmacology
|
b4978449-24e4-48d2-a4ab-8d0ad1d6b3f3
|
10101189
|
Pharmacology[mh]
|
Introduction
Guideline rationale
Guideline method
Strength of evidence and recommendations
Background
History
Definition
Aetiology
Catatonia due to a medical condition
Catatonia due to another psychiatric disorder
Clinical features
Descriptive epidemiology
Clinical assessment
History and physical examination
Rating instruments
Investigations
Challenge tests
Lorazepam and other benzodiazepines
Zolpidem
Other drugs
Differential diagnosis
Treatment
General approach
First-line treatment
Non-response
Underlying condition
Complications
GABA-ergic pharmacotherapies
Electroconvulsive therapy
Other therapies
NMDA receptor antagonists
Dopamine precursors, agonists and reuptake inhibitors
Dopamine receptor antagonists and partial agonists
Anticonvulsants
Anticholinergic agents
Miscellaneous treatments
Repetitive transcranial magnetic stimulation and transcranial direct-current stimulation as alternatives to ECT
Subtypes of catatonia and related conditions
Periodic catatonia
Malignant catatonia
Neuroleptic malignant syndrome
Antipsychotic-induced catatonia
Considerations in special groups and situations
Children and adolescents
Older adults
The perinatal period
The reproductive safety of lorazepam in the perinatal period
The use of ECT in the perinatal period
Autism spectrum disorder
Medical conditions
Considerations in kidney disease
Considerations in liver disease
Considerations in lung disease
Research priorities
Acknowledgement
Declaration of conflicting interests
Funding
Supplemental material
References

Guideline rationale Catatonia is a severe neuropsychiatric disorder affecting movement, speech and complex behaviour, often involving autonomic and affective disturbances. It has been associated with excess morbidity and, sometimes, mortality compared to other serious mental illnesses . For much of the 20th century, catatonia was considered a subtype of schizophrenia, but, in recent decades, emerging evidence has shown that catatonia can occur in a range of psychiatric, neurological and general medical conditions . This is now reflected in both the International Classification of Diseases, Eleventh Edition ( ICD-11 ) and the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision ( DSM-5-TR ), which acknowledge the existence of catatonia in a range of conditions. However, recognition of catatonia is often poor , and knowledge about the condition and its distinctive treatments is frequently limited among clinicians . There are no national UK guidelines that adequately cover the management of catatonia. The only UK guidance that mentions catatonia is the 2003 National Institute for Health and Care Excellence (NICE) Technology Appraisal (TA59) on the use of electroconvulsive therapy (ECT), which recognises catatonia as an indication for ECT, but there is no consideration of pharmacological treatment for catatonia . From an international perspective, the European Association of Psychosomatic Medicine and the US Academy of Consultation-Liaison Psychiatry have produced guidelines for the management of the subpopulation of patients with catatonia that occurs in medically ill patients. 
The schizophrenia guidelines from the World Federation of Societies of Biological Psychiatry, the American Psychiatric Association (APA) and the German Association for Psychiatry, Psychotherapy and Psychosomatics briefly mention catatonia and suggest treatment with benzodiazepines, glutamate antagonists (amantadine and memantine) or ECT . There is a clear gap in the literature for a multidisciplinary consensus guideline that comprehensively reviews the current evidence and offers treatment recommendations. Guideline method To address this need for a guideline, the British Association for Psychopharmacology (BAP) convened a group of experts with representation from general adult psychiatry, neuropsychiatry, child and adolescent psychiatry, liaison (consultation-liaison) psychiatry, perinatal psychiatry, autoimmune neurology, movement disorder neurology, pharmacy and primary care. Group members spanned the UK, USA, Canada, India and Germany, and were a mixture of disease experts and those with expertise in psychopharmacology, neuroimaging, epidemiology and clinical trials. There was patient representation on the group from its inception. A virtual meeting was convened in June 2022, where group members presented proposals for separate sections of the guideline, which were discussed by the overall group. Following the meeting, certain group members drafted sections of the guideline, which were edited and synthesised into a first draft. This draft was then disseminated to all authors for further amendments before a second draft was made for further review. The recommendations are summarised in an algorithm in . A list of the recommendations, separate from the rest of the manuscript, is provided in Supplemental Material 1 . Supplemental Material 2 provides a plain language summary of the guidelines for patients and carers. Example slides, which may be used for presentations of the guidelines, are available in Supplemental Material 3 . Strength of evidence and recommendations To assess the strength of evidence and recommendations, the guideline group adopted the schema developed by . This system provides categories of evidence for the purposes of assessing causal relationships as well as a classification of the strength of recommendations. To grade the strength of evidence for non-causal relationships, we used the classification employed for the British Association for Psychopharmacology guidelines for the pharmacological treatment of schizophrenia, as shown in . 
History Descriptions of what was likely catatonia date back to antiquity . However, major interest in motor manifestations of psychiatric disorders began only in the mid-19th century. At that time, Griesinger drew a distinction between abnormal movements that were the product of agency and those that were unconscious processes . 
The term ‘catatonia’ was coined by Karl Ludwig Kahlbaum in 1874, who described an early phase of alternation between excitement and stupor, followed by a phase of qualitatively abnormal movements , though other 19th-century authors had described similar phenomena . By the end of the 19th century, Kraepelin’s diagnostic classifications of psychiatric disorders incorporated catatonia into an enlarged concept of dementia praecox where motor signs were the result of psychological processes , and therefore catatonia was subsumed under the diagnosis of schizophrenia by Eugen Bleuler. This differed from Kahlbaum, who had conceived of catatonia as an independent disorder with motor, behavioural and affective signs as primary manifestations of the disorder . Moreover, Kahlbaum emphasised the strong occurrence of affective symptoms in combination with motor and behavioural abnormalities . Catatonia as a subtype of schizophrenia went on to be the conceptual model used by earlier editions of the ICD and DSM . However, two papers published in 1976 challenged this assumption, arguing that catatonia appears in a range of psychiatric and medical disorders, not exclusively (or even mainly) in schizophrenia . The current major diagnostic manuals ( ICD-11 and DSM-5-TR ) have since endorsed a broader concept of catatonia and permit diagnosis in the context of other mental and physical disorders, as well as providing an ‘unspecified’ category. Definition Unlike many psychiatric disorders, where there is an emphasis on symptoms, the clinical features of catatonia largely consist of observed or elicited signs. More than 50 such signs have been identified . These signs cover focal motor activity (e.g. catalepsy, posturing, mannerisms, stereotypies, grimacing and echopraxia), generalised motor activity (stupor and agitation), speech (mutism, verbigeration and echolalia), affect (affective blunting, anxiety and ambivalence), complex behaviour (negativism, reduced oral intake and withdrawal) and autonomic activity (tachycardia and hypertension). They concern failures in initiation of activity (stupor, mutism and reduced oral intake) and in cessation of activity (perseveration, catalepsy and posturing). With such a wide range of clinical signs, there is a need to identify which may be specific to catatonia. Those that have little specificity (e.g. tachycardia and anxiety) are unlikely to be very useful diagnostically, although they may be helpful in gauging severity and treatment response. In terms of sensitivity, studies have failed to identify any catatonic feature that is invariably present in catatonia , which is the case for many psychiatric disorders. If there are no clinical signs that are pathognomonic of catatonia, it is reasonable to use a combination of clinical signs. The question then is how many signs should be used. Between two and four signs have been proposed as an appropriate threshold . One important study had an a priori threshold of two catatonic signs and found that there was a high response rate to a lorazepam challenge, but ultimately all included patients had at least three signs . Others propose the presence of at least one motor, one behavioural and one affective sign . Such a definition of catatonia conforms to the psychomotor concept introduced by , and it does not regard any of the catatonic signs as pathognomonic for catatonia . Without a gold-standard biomarker, there can only be moderate confidence around the validity of diagnostic criteria. 
There is also a certain circularity to defining a syndrome based on response to benzodiazepines, then testing the same drugs as treatments. However, benzodiazepine response can perhaps be considered as a surrogate marker for some form of as yet not fully characterised pathophysiological process, although the response to benzodiazepines is not universal. One of the more compelling pieces of evidence for a requirement of three catatonic signs derives from a cluster analysis of potential catatonic features, which distinguished patients with and without catatonia. Using this as a gold standard, the authors ascertained that a combination of at least three signs best fitted the cluster-derived catatonic syndrome . A threshold of four catatonic signs is highly specific but may miss some cases and thus have poorer sensitivity . Definitions of different forms of catatonia are shown in . Recommendation on the definition of catatonia Catatonia should be diagnosed based on the presence of three or more catatonic signs, as in DSM-5-TR or ICD-11 . (B) Aetiology Catatonic signs are not uncommon and can occur in many psychiatric and medical disorders. The lingering nosological legacy of catatonic schizophrenia, whereby catatonia necessarily implied schizophrenia, has been laid to rest by ICD-11 and DSM-5-TR , where catatonia can now be diagnosed in the context of many different conditions . The terms ‘organic’ or ‘secondary’ catatonia have been used in the past to signify underlying medical or neurological aetiological conditions . However, the distinction between ‘organic’ and ‘functional’ is perhaps best avoided due to their differing connotations in disparate clinical settings. Our consideration of the medical and psychiatric conditions underlying catatonia is largely based on clinical judgements in the published literature about what is likely to have led to catatonia, rather than on robust epidemiological associations. Often there is a close temporal relationship and sometimes a concomitant response to treatment. However, the literature largely rests on heterogeneous case reports and series, sometimes lacking standardised assessment. Many reports do not fulfil the Bradford Hill criteria for causation . Moreover, as prolonged or severe catatonia can, in turn, result in medical complications, it can be difficult to elucidate the cause-and-effect dilemma in some cases. However, it is hard to design studies to test for aetiological links, as there is under-detection and a lack of comprehensive investigations in many cases with catatonia. This may lead to a publication bias at both ends, with many cases going under-reported but the more dramatic ones finding favour for publication. Catatonia due to a medical condition There is evidence to suggest that in about 20% of patients with catatonia in unselected populations and more than 50% of patients with catatonia in acute medical and surgical settings there is an associated medical disorder that may be contributing to their presentation; this percentage rises to almost 80% in older patients . These figures exclude catatonic signs seen in neuroleptic malignant syndrome (NMS). There are several clinical features that suggest a higher likelihood of ‘medical catatonia’, and these include comorbid delirium, clinically significant autonomic disturbances, catatonic excitement, presence of the grasp reflex, pneumonia, known history of a neurological condition and history of seizures . 
describes the common underlying medical disorders associated with catatonia in a systematic review of 11 studies, with inflammatory brain disorders contributing 28.8% out of a total of 302 patients. These disorders include encephalitis (most common) and systemic lupus erythematosus (SLE), followed by neural injury (19.2%; with vascular and degenerative conditions the most common causes of injury), toxins or medications (11.6%; such as benzodiazepine withdrawal), structural brain pathology (9.6%; such as space occupying lesions) and epilepsy (9.3%), with miscellaneous disorders and states (such as hyponatremia, postpartum, renal failure and sepsis) contributing 19.5%. Unlike delirium, where metabolic and systemic disorders predominate, 68.9% of medical disorders underlying catatonia were secondary to a central nervous system (CNS)-specific disease . The medical disorders underlying catatonia listed in this guideline are not a comprehensive list, as such a compilation is out of the scope of this guidance. In , we provide a selection of the most important underlying disorders. In terms of focal neurological lesions in catatonia, there are case reports of catatonia associated with lesions to the frontal, parietal and temporal lobes, basal ganglia, diencephalon and cerebellum and lesions around the third ventricle. However, larger studies have found that most of the structural neuroimaging abnormalities in catatonia consist of generalised atrophy or non-specific white matter abnormalities . In terms of functional neuroimaging, decreased activation in the contralateral motor cortex, decreased regional cerebral blood flow (r-CBF) in right fronto-parietal cortex and decreased density of γ-aminobutyric acid (GABA)-A receptors in the left sensorimotor cortex and right parietal cortex have all been found. Catatonia due to another psychiatric disorder In DSM-5-TR , catatonic signs represent a specifier for autism spectrum disorder, mood disorders (major depressive disorder, bipolar I disorder and bipolar II disorder), psychotic disorders (schizophrenia, schizoaffective disorder, schizophreniform disorder, brief psychotic disorder and substance-induced psychotic disorder) and another medical condition. The DSM-5-TR also includes a category for unspecified catatonia . The DSM-IV Handbook of Differential Diagnosis provided a helpful hierarchy of diagnosis for catatonia, with medical aetiology first, followed by antipsychotic-induced catatonia, then substance intoxication or withdrawal, and then bipolar disorder and major depression, and then other psychiatric disorders including schizophrenia. This remains a useful hierarchy for clinical use. Among primary psychiatric disorders, observational studies have reported catatonia in association with depression, mania, schizophrenia, autism spectrum disorder, anxiety disorders and postpartum psychosis . Other psychiatric disorders with evidence from case reports or case series include obsessive-compulsive disorder and post-traumatic stress disorder ( ; Dhossche et al., 2010b; ; ). Clinical features Given that catatonic signs can fluctuate over time, catatonic signs should be examined both cross-sectionally and longitudinally using the diagnostic systems ICD-11 and DSM-5-TR or one of the available clinical rating scales (for details, see sections ‘History and physical examination’ and ‘Rating instruments’). 
The characteristic motor signs include mannerisms, stereotypy, festination, athetotic movements, dyskinesias, Gegenhalten , posturing, catalepsy, waxy flexibility ( flexibilitas cerea ), rigidity, muscular hypotonus, sudden muscular tone alterations and akinesia. The characteristic affective features include compulsive emotions, emotional lability, impulsivity, aggression, excitement, affect-related behaviour, flat affect, affective latency, anxiety, ambivalence, staring and agitation. The cognitive-behavioural catatonic features include grimacing, verbigeration, perseveration, aprosodic speech, abnormal speech, automatic obedience, echolalia/echopraxia, Mitgehen / Mitmachen , compulsive behaviour, negativism, autism/withdrawal, mutism, stupor, loss of initiative and vegetative abnormalities. From a longitudinal perspective, catatonic signs often fluctuate and patients can show different forms of catatonia at different points in their illness. The courses and outcomes of catatonia vary. A rare form of catatonia is ‘periodic’ catatonia (see for overview of different forms of catatonia), characterised by a cyclic pattern of akinesia (stupor) and hyperkinesia (excitement), with intervals of remission (see section ‘Periodic catatonia’ for more details). Acute catatonic states can be rapidly relieved due to early therapy or may become a residual state. The clinical profile of catatonia observed in patients with chronic psychotic disorders appears to be different from that seen in acutely emerging mostly stuporous catatonic states (see e.g. . Descriptive epidemiology Many estimates of catatonia prevalence in various populations of patients seen in mental health services are available. provided a synthesis of these results and the headline figure is that about 9% (95% confidence interval (CI): 6.9–11.7%) of mental health patients have features of catatonia. However, there are some important considerations to keep in mind. First, there is considerable variation across the studies that is not explained by sampling variation alone. For example, the larger studies reported much lower prevalence. For studies where n was greater than 1000, the prevalence was 2.3% (95% CI: 1.3–3.9%). Some of these studies also estimated prevalence within a series of patients with schizophrenia, which might be expected to have a higher prevalence than in individuals with some other mental disorders. There did not appear to be a consistent relationship between catatonia prevalence and whether the study was conducted in high-income or low- and middle-income countries. Second, many of these studies relied upon clinical diagnoses. It is probable that catatonia is under-diagnosed clinically and the smaller studies were far more likely to have used a systematic means of identifying catatonia, thereby explaining higher reported prevalence. estimated an incidence of catatonia in the general population, in London, UK, finding that catatonia occurred in 10.6 (95% CI: 10.0–11.1) per 100,000 person-years, but this also relied upon the mention of catatonia in the healthcare notes. In a large recent study in US non-federal general hospitals, a discharge diagnosis with an ICD-10 catatonia code occurred in 0.05% of hospital admissions . Some reports indicate a temporal decline in the diagnosis of catatonia in routinely collected data. described a drop in incidence of catatonic schizophrenia between the 1950s and 1970s in Finnish registry data, especially in the age group of 25–40 years. 
However, it is possible that this apparent decline is a result of changes in diagnostic practice rather than a true change in incidence. reported that the apparent decline in catatonia between 1980 and 2000 in routine diagnostic data from the Netherlands could be explained by a change in diagnostic habits. A sample of patients with more detailed clinical data illustrated a high frequency of catatonic presentations from 2001 to 2003. reported an increase in incidence between 2007 and 2016. The varying interest in catatonia and changes in diagnostic practice over time make the interpretation of time trend data very difficult. Several studies conducted in Western nations have found that catatonia was more common among individuals from ethnic minorities , often by a large margin.
History and physical examination Studies commonly identify at least three factors or principal components of catatonia, which include hyperkinetic, hypokinetic and parakinetic (i.e. abnormal movements) phenotypes. Therefore, as a rule, catatonia should be considered as a differential diagnosis whenever a patient exhibits substantially altered levels of motor activity or abnormal behaviour, especially where it is grossly inappropriate to context. The diagnosis of catatonia can typically be made on clinical assessment alone, even though patients with catatonia are often unable to provide a clear narrative history. Collateral sources of information should be sought to clarify potential explanations for the presenting syndrome and time course. The clinician should seek detailed information regarding the patient’s medical, neurological and psychiatric history, along with exposure to or withdrawal from medications (plasma concentration measurement may be used to ascertain concordance where available), recreational substances and blood-borne or sexually transmitted infections . It is also important to obtain a detailed family medical, neurological and psychiatric history to identify potentially specific biological vulnerability. Physical examination is also essential .
The overwhelming majority of patients with catatonia are assessed within secondary care , which seems appropriate given the complexities of management and the risks to the patient. Every patient presenting with a first lifetime episode of catatonia should receive a thorough evaluation for potential underlying medical disorders with a focus on relevant neurological conditions (see section ‘Aetiology’) . When a patient presents with a recurrent episode of catatonia, the assessing clinician should not presume that an adequate workup was completed previously; instead, the adequacy of prior medical evaluation should be confirmed. In addition, every time a patient presents with catatonia, a medical evaluation is important to address potential complications of catatonia , as well as for care planning. Patients who do not participate in clinical evaluation should be assessed for the capacity to refuse evaluation and care. This is particularly important whenever catatonia is considered because several features (e.g. stupor, mutism, negativism or withdrawal) can be hard to distinguish from volitional acts. The fluctuating nature of catatonic signs can also reinforce the misinterpretation of wilful non-engagement. It is also important to keep in mind that patients with catatonia often understand what others are saying yet are unaware of their inability to respond . As such, clinicians should speak to patients with catatonia as though they comprehend what is being told to them because they may; in fact, once catatonia resolves, patients may have vivid recall of what they experience while in a catatonic state. Reliable identification of catatonia requires deliberate assessment . Three primary means of assessment include clinical observation, elicitation and physical examination. The clinician should observe the patient before evaluation, often casually without drawing attention to the fact, while no one is interacting with them to evaluate for spontaneous expression of catatonic features. Observation should continue throughout and then after direct evaluation. Next, several features of catatonia must be elicited by environmental stimuli. For instance, demonstration of negativism requires that an instruction or prompt be given, and echophenomena require speech or behaviours to be mimicked. Assessment for catalepsy, rigidity and waxy flexibility (variously defined, see ) requires physical examination. Collateral information is needed to assess the extent and duration of withdrawal, and evaluation for autonomic abnormality involves assessment of vital signs, either by chart review or by obtaining them directly. Recommendations on the assessment of catatonia Initial assessment and treatment of catatonia should be conducted within secondary care. (S) Catatonia should be considered as a differential diagnosis whenever a patient exhibits a substantially altered level of activity or abnormal behaviour, especially where it is grossly inappropriate to the context. (D) A collateral history should be sought wherever possible. (S) The history should include identification of possible medical and psychiatric disorders underlying catatonia, as well as prior response to treatment. (S) Physical examination should include assessment for catatonic signs, signs of medical conditions that may have led to the catatonia and signs of medical complications of catatonia. (D) When assessing a patient with catatonia, clinicians should interact with the person as if they are able to understand what is being said to them. 
(S) In an individual who is suspected to have catatonia, non-engagement with clinical assessment should not automatically be assumed to be wilful. Mental capacity to engage in an assessment should be assessed and, if found lacking, consideration should be given to acting in an individual’s best interests within the appropriate legal framework. (S) Rating instruments Most catatonia rating instruments approach catatonia scoring in a polythetic fashion (i.e. any combination of a diverse range of clinical features can contribute towards reaching a threshold for caseness), with the Northoff Catatonia Rating Scale (NCRS) a notable exception . The Rogers Catatonia Scale was designed to differentiate catatonic depression from non-depressed patients with Parkinson’s disease. Its exclusive focus on motoric features of catatonia means that it has uncertain generalisability to other populations. It also omits several diagnostic criteria included in the ICD-11 . The Kanner scale also has a significant weakness in that it has yet to be validated in a clinical cohort. As such, both the Rogers and Kanner scales should be disfavoured from routine clinical use at this time. The Bräunig Catatonia Rating Scale has good psychometric properties and has been validated against the criteria for catatonia in DSM-III-R , although DSM-III-R is somewhat different from DSM-5-TR in this regard. The Bräunig scale was scored using a robust 45-min semi-structured interview, which is likely infeasible in routine clinical practice. It also has some idiosyncratic definitions of its motor signs . The two leading catatonia instruments are the Bush-Francis Catatonia Rating Scale (BFCRS) and NCRS , each with its unique strengths and weaknesses. The BFCRS is the most widely cited and clinically used scale worldwide. It has good psychometric properties and is the only scale to be validated by a lorazepam challenge . Its primary limitation is its idiosyncratic definition of waxy flexibility ; however, with slight adaptation, it assesses all DSM-5-TR criteria in its screening instrument alone , which makes for an efficient clinical evaluation. The full 23-item scale evaluates all ICD-11 catatonia criteria. The BFCRS scale was originally validated using a standardised clinical exam against other clinical criteria . It has been found to be sensitive to change in clinical status in response to treatment . The exam has been further refined in a Training Manual for the BFCRS and depicted in videographic educational resources, all freely available online at https://bfcrs.urmc.edu . The NCRS has good psychometric properties and offers the most comprehensive evaluation of catatonic signs. It divides its 40 items into three categories: behaviour (15 items), motor (13 items) and affective (12 items). The NCRS assesses for all diagnostic criteria of catatonia in the DSM-5-TR and ICD-11 , and its definitions of motoric findings are consistent with their definitions in these diagnostic systems as well. Among catatonia scales, the NCRS uniquely emphasises affective features. Notably, the NCRS differs from other scales by requiring the presence of at least one feature in each of its three domains (i.e. motor, affective and behavioural). Although such an approach is supported by Kahlbaum’s original description and some studies on subjective reports of catatonia , it is not supported by DSM-5-TR or ICD-11 .
With such a broad range of clinical features evaluated, the NCRS’s lack of a standardised clinical assessment is a significant limitation to its reliability. Although most scales report high interrater reliability in published studies (see for a detailed overview), this finding does not necessarily translate to the accurate use of a scale in clinical practice. There is evidence that training using videographic resources can improve use of the BFCRS . The results of a catatonia rating scale should be converted to diagnostic criteria for clinical diagnosis . Recommendation on the use of rating instruments When assessing for the presence of catatonia or its response to treatment, a validated instrument such as the BFCRS or the NCRS should be used. (C) Research on catatonia should report how individual items have been defined, including thresholds. (S) Investigations The diagnosis of catatonia is made through clinical observation, interview and physical examination of the patient, as well as from collateral information from carers and review of the medical record, and in general is not established through clinical investigations (e.g. laboratory tests, brain imaging, EEG, cerebrospinal fluid (CSF) analysis, urine drug screen). Clinical investigations should be ordered based on history and clinical examination findings, taking into consideration the overall severity of illness as well as medical and psychiatric comorbid illnesses. Medical investigations are typically performed to rule out catatonia-like conditions or to understand the underlying aetiology of catatonia as this informs treatment and prognosis. Although catatonia is not diagnosed through neuroimaging, given the large number of neurological conditions associated with catatonia (see ), brain imaging is often requested as part of the medical evaluation of a patient with catatonia. A systematic review of structural and functional brain imaging in catatonia, which identified 137 case reports and 18 studies with multiple patients (pooled n = 186), found that more than 75% of cases of catatonia were associated with non-focal brain imaging abnormalities affecting several brain regions, and associated with a variety of underlying conditions, including neuroinflammatory conditions (SLE, encephalitis) . The most common abnormalities in catatonia are generalised atrophy and non-specific white matter abnormalities . Even less is known about laboratory abnormalities present in patients experiencing catatonia. In a case–control study of 1456 patients with catatonia and 24,956 psychiatric inpatient controls, serum iron was reduced in catatonia cases (11.6 vs 14.2 μmol/L, odds ratio (OR): 0.65; 95% CI: 0.45–0.95), creatine kinase (CK) was raised (2545 vs 459 IU/L, OR: 1.53; 95% CI: 1.29–1.81), but there was no difference in C-reactive protein or white blood cell count , though analysis relied on a small subset of the patients with laboratory results. N -methyl-D-aspartate (NMDA) receptor antibodies were significantly associated with catatonia, but there were only a small number of cases . However, it should be noted that there is a strong association between anti-NMDA receptor encephalitis and catatonia, with most patients with this form of autoimmune encephalitis experiencing catatonia at some point in their illness . 
Other autoantibodies have also been identified in association with catatonia including anti-Hu antibodies, anti-myelin oligodendrocyte glycoprotein antibodies, antinuclear antibodies (ANA), antiphospholipid antibodies, anti-ribosomal P antibodies, anti-Ro antibodies, anti-Smith antibodies, double-stranded DNA antibodies, GABA-A receptor antibodies, GAD-65 antibodies, leucine-rich glioma-inactivated 1 antibodies, ribonucleoprotein antibodies and septin-7 antibodies . However, the prevalence and pathogenicity of these antibodies in catatonia is unclear, although it is a rapidly expanding field . In terms of neurophysiology, there is a clear case for an electroencephalogram (EEG) in the context of possible non-convulsive status epilepticus (NCSE), which can present as catatonia . Red flags for NCSE include subtle ictal phenomena (such as twitching of the face or extremities), comorbid neurological disease and a change in medications that affect seizure threshold . Another quite specific EEG finding of relevance to catatonia is the extreme delta brush, which occurs in some patients with anti-NMDA receptor encephalitis . The literature on the value of ‘encephalopathic’ findings on EEGs suggests that this is not entirely specific for a medical disorder underlying catatonia . Any hospital work-up must weigh the potential risks and benefits of detailed investigation. Hospital investigations may contribute to anxiety . Given that several studies have associated catatonia with intense anxiety , prolonged uncertainty amid medical testing may be expected to worsen this in some patients. In addition, the costs and potential harms of investigation (e.g. radiation exposure with computed tomography (CT) imaging, or magnetic resonance imaging (MRI) scans in patients who are unable to communicate whether they have any metallic implants) must be considered. Recommendations on the use of investigations in catatonia Investigations, such as blood tests, urine drug screen, lumbar puncture, electroencephalography and neuroimaging, should be considered based on history and examination findings, taking into account the possible diagnoses that may mimic catatonia and the possible underlying aetiology of the catatonia. (D) In patients experiencing a first episode of catatonia or where the diagnosis underlying catatonia is unclear, consider a CT or MRI scan of the brain. (C) In patients experiencing a first episode of catatonia or where the diagnosis underlying catatonia is unclear, consider assessing for the presence of antibodies to the NMDA receptor and other relevant autoantibodies in serum and CSF. (D) In patients with risk factors for seizures, possible evidence of a seizure or possible encephalitis, consider performing an EEG (with continuous monitoring if available). (C) Challenge tests DSM-5-TR has included a diagnosis of unspecified catatonia to encourage early treatment while a search for an underlying disorder can continue. Challenge tests may provide support in clarifying diagnosis and appropriate treatment. This section is limited to the use of benzodiazepines and zolpidem as a diagnostic and therapeutic ‘challenge test’. These agents are discussed in greater detail in section ‘GABA-ergic pharmacotherapies’. In 1930, Bleckwenn described the use of short-acting barbiturates to ‘render catatonic patients responsive’ . 
Lorazepam (and to a limited extent, other benzodiazepines, such as diazepam, midazolam, clonazepam and oxazepam) have now replaced the use of barbiturates (such as amobarbital and sodium thiopental) as a diagnostic challenge (sometimes called the lorazepam test or the diazepam test) for confirming the diagnosis of catatonia . Lorazepam and other benzodiazepines Lorazepam is an effective and clinically useful diagnostic challenge test for catatonia. It is available in oral, liquid, intramuscular (IM) and intravenous (IV) forms, and is available in a variety of clinical settings. Lorazepam is a non-selective positive allosteric modulator of GABA-A receptors. Possible therapeutic mechanisms in catatonia are discussed in section ‘GABA-ergic pharmacotherapies’. The recommended dose for a lorazepam challenge is 1–2 mg IV , IM or 2 mg oral . The response to an oral challenge is often slower than for parenteral administration and oral formulations can be harder to administer to both hyperkinetic and hypokinetic patients. A positive response to a lorazepam challenge, commonly defined as a 50% reduction in catatonic signs on a standardised scale, makes a diagnosis of catatonia more likely, but it is not 100% specific. A good response on the first day appears predictive of overall response to lorazepam . Low serum iron has been reported as a predictor of poor response with benzodiazepines . An example protocol is provided in . Based on their clinical effectiveness in these conditions, benzodiazepines may also be considered as a therapeutic test in antipsychotic-induced catatonia , NMS and malignant catatonia. Zolpidem described a serendipitous dramatic response to oral zolpidem 10 mg in a woman with a subcortical stroke whose catatonia was largely unresponsive to lorazepam or ECT. This was followed by other positive reports . The response is transitory, as with benzodiazepines, and is usually observed for 3–6 h , which is consistent with zolpidem’s short elimination half-life of 1–4 h . Catatonia has also been reported in zolpidem withdrawal . Several reports have been published of zolpidem’s effectiveness following neurological injury due to a variety of different brain insults . It is not clear whether some of these cases following brain injury had undiagnosed catatonia. It appears that the positive effect of zolpidem in post-brain injury states occurs at a sub-sedative dose , and there is a suggestion of a differential response in patients with traumatic or anoxic brain injury . Zolpidem is an imidazopyridine that is a selective positive modulator of the GABA-A alpha-1 subunit and this action appears to be important for its clinical efficacy . It seems selective for the gamma-2 subunit of the GABA-A receptor (alpha1-beta2-gamma2 GABA-A receptor) in animal experiments , but the implications of this in zolpidem’s efficacy as a diagnostic challenge tool are not entirely clear. The recommended dose of zolpidem is usually 10 mg orally for a diagnostic and/or therapeutic test , but 5 mg has sometimes been used in older patients . Zolpidem is available in oral formulation (and as a sublingual preparation in some countries), with no parenteral preparation available, which somewhat limits its use. A therapeutic plasma concentration of 80–150 ng/mL has been suggested, with an onset of action within 10–30 min of ingestion of 10 mg zolpidem .
showed that mutism is not a good prognostic sign for lorazepam response, so it is interesting that zolpidem may differentially help improve impairment of verbal fluency in patients with catatonia . Other drugs In contrast to reports of ketamine causing catatonic signs, there is at least one report of slow IV injection of sub-anaesthetic doses of ketamine (12.5 mg) producing dramatic improvement in catatonic signs . More studies, including randomised controlled trials (RCTs), are needed before this translates into clinical practice as a diagnostic test. Recommendations on the use of challenge tests When a diagnosis of catatonia is uncertain, a diagnostic challenge using lorazepam should be considered. (B) When a diagnosis of catatonia is uncertain, a diagnostic challenge using zolpidem may be considered. (C) In suspected or confirmed cases of catatonia, a lorazepam challenge may be used to predict future response to benzodiazepines. (B) Differential diagnosis There is some overlap between the differential diagnosis of catatonia (i.e. mimics of catatonia) and the conditions that may underlie catatonia. For example, NMS is sometimes listed in both categories, probably because of diverging views as to what extent it represents a form of catatonia (see section ‘Neuroleptic malignant syndrome’). For some conditions, their status is subject to debate. In , we provide a list of some of the more important conditions that may mimic catatonia, what the similarities are and how they can be differentiated. As general principles, the positive features of catatonia (such as echophenomena, catalepsy and posturing) may have greater discriminatory value than some of the negative features (such as mutism and stupor). Challenge tests are useful in many situations (see section ‘Challenge tests’), but their sensitivity and specificity are imperfect; importantly, stiff person syndrome and NCSE are likely to improve with a lorazepam challenge. Although it has been asserted that serotonin syndrome (SS) is a form of catatonia , there is currently insufficient systematic evidence to support this claim . Furthermore, although ECT, a core intervention for catatonia, has been advocated for the treatment of SS , recent reports suggest that it is ineffective and, in fact, may exacerbate SS .
The overwhelming majority of patients with catatonia are assessed within secondary care , which seems appropriate given the complexities of management and the risks to the patient. Every patient presenting with a first lifetime episode of catatonia should receive a thorough evaluation for potential underlying medical disorders with a focus on relevant neurological conditions (see section ‘Aetiology’) . When a patient presents with a recurrent episode of catatonia, the assessing clinician should not presume that an adequate workup was completed previously; instead, the adequacy of prior medical evaluation should be confirmed. In addition, every time a patient presents with catatonia, a medical evaluation is important to address potential complications of catatonia , as well as for care planning. Patients who do not participate in clinical evaluation should be assessed for the capacity to refuse evaluation and care. This is particularly important whenever catatonia is considered because several features (e.g. stupor, mutism, negativism or withdrawal) can be hard to distinguish from volitional acts. The fluctuating nature of catatonic signs can also reinforce the misinterpretation of wilful non-engagement. It is also important to keep in mind that patients with catatonia often understand what others are saying yet are unaware of their inability to respond . As such, clinicians should speak to patients with catatonia as though they comprehend what is being told to them because they may; in fact, once catatonia resolves, patients may have vivid recall of what they experience while in a catatonic state. Reliable identification of catatonia requires deliberate assessment . Three primary means of assessment include clinical observation, elicitation and physical examination. The clinician should observe the patient before evaluation, often casually without drawing attention to the fact, while no one is interacting with them to evaluate for spontaneous expression of catatonic features. Observation should continue throughout and then after direct evaluation. Next, several features of catatonia must be elicited by environmental stimuli. For instance, demonstration of negativism requires that an instruction or prompt be given, and echophenomena require speech or behaviours to be mimicked. Assessment for catalepsy, rigidity and waxy flexibility (variously defined, see ) requires physical examination. Collateral information is needed to assess the extent and duration of withdrawal, and evaluation for autonomic abnormality involves assessment of vital signs, either by chart review or by obtaining them directly. Recommendations on the assessment of catatonia Initial assessment and treatment of catatonia should be conducted within secondary care. (S) Catatonia should be considered as a differential diagnosis whenever a patient exhibits a substantially altered level of activity or abnormal behaviour, especially where it is grossly inappropriate to the context. (D) A collateral history should be sought wherever possible. (S) The history should include identification of possible medical and psychiatric disorders underlying catatonia, as well as prior response to treatment. (S) Physical examination should include assessment for catatonic signs, signs of medical conditions that may have led to the catatonia and signs of medical complications of catatonia. (D) When assessing a patient with catatonia, clinicians should interact with the person as if they are able to understand what is being said to them. 
(S) In an individual who is suspected to have catatonia, non-engagement with clinical assessment should not automatically be assumed to be wilful. Mental capacity to engage in an assessment should be assessed and, if found lacking, consideration should be given to acting in an individual’s best interests within the appropriate legal framework. (S) Most catatonia rating instruments approach catatonia scoring in a polythetic fashion (i.e. any combination of a diverse range of clinical features can contribute towards reaching a threshold for caseness), with the Northoff Catatonia Rating Scale (NCRS) a notable exception . The Rogers Catatonia Scale was designed to differentiate catatonic depression from non-depressed patients with Parkinson’s disease. Its exclusive focus on motoric features of catatonia means that it has uncertain generalisability to other populations. It also omits several diagnostic criteria included in the ICD-11 . The Kanner scale also has a significant weakness in that it has yet to be validated in a clinical cohort. As such, both the Rogers and Kanner scales should be disfavoured from routine clinical use at this time. The Bräunig Catatonia Rating Scale has good psychometric properties and has been validated against the criteria for catatonia in DSM-III-R , although DSM-III-R is somewhat different from DSM-5-TR in this regard. The Bräunig scale was scored using a robust 45-min semi-structured interview, which is likely infeasible in routine clinical practice. It also has some idiosyncratic definitions of its motor signs . The two leading catatonia instruments are the Bush-Francis Catatonia Rating Scale (BFCRS) and NCRS , each with its unique strengths and weaknesses. The BFCRS is the most widely cited and clinically used scale worldwide. It has good psychometric properties and is the only scale to be validated by a lorazepam challenge . Its primary limitation is its idiosyncratic definition of waxy flexibility ; however, with slight adaptation, it assesses all DSM-5-TR criteria in its screening instrument alone , which makes for an efficient clinical evaluation. The full 23-item scale evaluates all ICD-11 catatonia criteria. The BFCRS scale was originally validated using a standardised clinical exam against other clinical criteria . It has been found to be sensitive to change in clinical status in response to treatment . The exam has been further refined in a Training Manual for the BFCRS and depicted in videographic educational resources, all freely available online at https://bfcrs.urmc.edu . The NCRS has good psychometric properties and offers the most comprehensive evaluation of catatonic signs. It divides its 40 items into three categories: behaviour (15 items), motor (13 items) and affective (12 items). The NCRS assesses for all diagnostic criteria of catatonia in the DSM-5-TR and ICD-11 , and its definitions of motoric findings are consistent with their definitions in these diagnostic systems as well. Among catatonia scales, the NCRS uniquely emphasises affective features. Notably, the NCRS differs from other scales by requiring the presence at least one feature in each of its three domains (i.e. motor, affective and behavioural). Although such an approach is supported by Kahlbaum’s original description and some studies on subjective reports of catatonia , it is not supported by DSM-5-TR or ICD-11 . With such a broad range of clinical features evaluated, the NCRS’s lack of a standardised clinical assessment is a significant limitation to its reliability. 
Although most scales report high interrater reliability in published studies (see for a detailed overview), this finding does not necessarily translate to the accurate use of a scale in clinical practice. There is evidence that training using videographic resources can improve use of the BFCRS . The results of a catatonia rating scale should be converted to diagnostic criteria for clinical diagnosis . Recommendation on the use of rating instruments When assessing for the presence of catatonia or its response to treatment, a validated instrument such as the BFCRS or the NCRS should be used. (C) Research on catatonia should report how individual items have been defined, including thresholds. (S) The diagnosis of catatonia is made through clinical observation, interview and physical examination of the patient, as well as from collateral information from carers and review of the medical record, and in general is not established through clinical investigations (e.g. laboratory tests, brain imaging, EEG, cerebrospinal fluid (CSF) analysis, urine drug screen). Clinical investigations should be ordered based on history and clinical examination findings, taking into consideration the overall severity of illness as well as medical and psychiatric comorbid illnesses. Medical investigations are typically performed to rule out catatonia-like conditions or to understand the underlying aetiology of catatonia as this informs treatment and prognosis. Although catatonia is not diagnosed through neuroimaging, given the large number of neurological conditions associated with catatonia (see ), brain imaging is often requested as part of the medical evaluation of a patient with catatonia. A systematic review of structural and functional brain imaging in catatonia, which identified 137 case reports and 18 studies with multiple patients (pooled n = 186), found that more than 75% of cases of catatonia were associated with non-focal brain imaging abnormalities affecting several brain regions, and associated with a variety of underlying conditions, including neuroinflammatory conditions (SLE, encephalitis) . The most common abnormalities in catatonia are generalised atrophy and non-specific white matter abnormalities . Even less is known about laboratory abnormalities present in patients experiencing catatonia. In a case–control study of 1456 patients with catatonia and 24,956 psychiatric inpatient controls, serum iron was reduced in catatonia cases (11.6 vs 14.2 μmol/L, odds ratio (OR): 0.65; 95% CI: 0.45–0.95), creatine kinase (CK) was raised (2545 vs 459 IU/L, OR: 1.53; 95% CI: 1.29–1.81), but there was no difference in C-reactive protein or white blood cell count , though analysis relied on a small subset of the patients with laboratory results. N -methyl-D-aspartate (NMDA) receptor antibodies were significantly associated with catatonia, but there were only a small number of cases . However, it should be noted that there is a strong association between anti-NMDA receptor encephalitis and catatonia, with most patients with this form of autoimmune encephalitis experiencing catatonia at some point in their illness . 
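As a brief aside on how case–control findings like those cited above are expressed, the sketch below shows how an odds ratio and its 95% confidence interval can be derived from a 2×2 table using the Woolf (log) method; the counts are invented for illustration and are not taken from any study cited in this guideline.

# Illustrative sketch: odds ratio and 95% confidence interval from a 2x2 table
# using the Woolf (log) method. The counts below are invented for demonstration
# and do not correspond to any study cited in this guideline.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    # a = cases with the finding, b = cases without, c = controls with, d = controls without.
    or_value = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_value) - z * se_log_or)
    upper = math.exp(math.log(or_value) + z * se_log_or)
    return or_value, lower, upper

if __name__ == "__main__":
    # Hypothetical: a laboratory finding present in 30 of 100 cases and 15 of 100 controls.
    or_value, lower, upper = odds_ratio_ci(a=30, b=70, c=15, d=85)
    print(f"OR = {or_value:.2f} (95% CI {lower:.2f} to {upper:.2f})")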
Other autoantibodies have also been identified in association with catatonia including anti-Hu antibodies, anti-myelin oligodendrocyte glycoprotein antibodies, antinuclear antibodies (ANA), antiphospholipid antibodies, anti-ribosomal P antibodies, anti-Ro antibodies, anti-Smith antibodies, double-stranded DNA antibodies, GABA-A receptor antibodies, GAD-65 antibodies, leucine-rich glioma-inactivated 1 antibodies, ribonucleoprotein antibodies and septin-7 antibodies . However, the prevalence and pathogenicity of these antibodies in catatonia is unclear, although it is a rapidly expanding field . In terms of neurophysiology, there is a clear case for an electroencephalogram (EEG) in the context of possible non-convulsive status epilepticus (NCSE), which can present as catatonia . Red flags for NCSE include subtle ictal phenomena (such as twitching of the face or extremities), comorbid neurological disease and a change in medications that affect seizure threshold . Another quite specific EEG finding of relevance to catatonia is the extreme delta brush, which occurs in some patients with anti-NMDA receptor encephalitis . The literature on the value of ‘encephalopathic’ findings on EEGs suggests that this is not entirely specific for a medical disorder underlying catatonia . Any hospital work-up must weigh the potential risks and benefits of detailed investigation. Hospital investigations may contribute to anxiety . Given that several studies have associated catatonia with intense anxiety , prolonged uncertainty amid medical testing may be expected to worsen this in some patients. In addition, the costs and potential harms of investigation (e.g. radiation exposure with computed tomography (CT) imaging, or magnetic resonance imaging (MRI) scans in patients who are unable to communicate whether they have any metallic implants) must be considered. Recommendations on the use of investigations in catatonia Investigations, such as blood tests, urine drug screen, lumbar puncture, electroencephalography and neuroimaging, should be considered based on history and examination findings, taking into account the possible diagnoses that may mimic catatonia and the possible underlying aetiology of the catatonia. (D) In patients experiencing a first episode of catatonia or where the diagnosis underlying catatonia is unclear, consider a CT or MRI scan of the brain. (C) In patients experiencing a first episode of catatonia or where the diagnosis underlying catatonia is unclear, consider assessing for the presence of antibodies to the NMDA receptor and other relevant autoantibodies in serum and CSF. (D) In patients with risk factors for seizures, possible evidence of a seizure or possible encephalitis, consider performing an EEG (with continuous monitoring if available). (C) DSM-5-TR has included a diagnosis of unspecified catatonia to encourage early treatment while a search for an underlying disorder can continue. Challenge tests may provide support in clarifying diagnosis and appropriate treatment. This section is limited to the use of benzodiazepines and zolpidem as a diagnostic and therapeutic ‘challenge test’. These agents are discussed in greater detail in section ‘GABA-ergic pharmacotherapies’. In 1930, Bleckwenn described the use of short-acting barbiturates to ‘render catatonic patients responsive’ . 
Lorazepam (and, to a limited extent, other benzodiazepines such as diazepam, midazolam, clonazepam and oxazepam) has now replaced the use of barbiturates (such as amobarbital and sodium thiopental) as a diagnostic challenge (sometimes called the lorazepam test or the diazepam test) for confirming the diagnosis of catatonia . Lorazepam and other benzodiazepines Lorazepam is an effective and clinically useful diagnostic challenge test for catatonia. It is available in oral, liquid, intramuscular (IM) and intravenous (IV) forms, and can be used in a variety of clinical settings. Lorazepam is a non-selective positive allosteric modulator of GABA-A receptors. Possible therapeutic mechanisms in catatonia are discussed in section ‘GABA-ergic pharmacotherapies’. The recommended dose for a lorazepam challenge is 1–2 mg IV , IM or 2 mg oral . The response to an oral challenge is often slower than for parenteral administration and oral formulations can be harder to administer to both hyperkinetic and hypokinetic patients. A positive response to a lorazepam challenge, commonly defined as a 50% reduction in catatonic signs on a standardised scale, makes a diagnosis of catatonia more likely, but it is not 100% specific. A good response on the first day appears predictive of overall response to lorazepam . Low serum iron has been reported as a predictor of poor response with benzodiazepines . An example protocol is provided in . Based on their clinical effectiveness in these conditions, benzodiazepines may also be considered as a therapeutic test in antipsychotic-induced catatonia , NMS and malignant catatonia. Zolpidem A serendipitous and dramatic response to oral zolpidem 10 mg was described in a woman with a subcortical stroke whose catatonia was largely unresponsive to lorazepam or ECT. This was followed by other positive reports . The response is transitory, as with benzodiazepines, and is usually observed for 3–6 h , which is consistent with zolpidem’s short elimination half-life of 1–4 h . Catatonia has also been reported in zolpidem withdrawal . Several reports have been published of zolpidem’s effectiveness following neurological injury due to a variety of different brain insults . It is not clear whether some of these cases following brain injury had undiagnosed catatonia. It appears that the positive effect of zolpidem in post-brain injury states occurs at a sub-sedative dose , and there is a suggestion of a differential response in patients with traumatic or anoxic brain injury . Zolpidem is an imidazopyridine that is a selective positive modulator of the GABA-A alpha-1 subunit and this action appears to be important for its clinical efficacy . It seems selective for the gamma-2 subunit of the GABA-A receptor (alpha1-beta2-gamma2 GABA-A receptor) in animal experiments , but the implications of this for zolpidem’s efficacy as a diagnostic challenge tool are not entirely clear. The recommended dose of zolpidem is usually 10 mg orally for a diagnostic and/or therapeutic test , but 5 mg has sometimes been used in older patients . Zolpidem is available in oral formulation (and as a sublingual preparation in some countries), with no parenteral preparation available, which somewhat limits its use. A therapeutic plasma concentration of 80–150 ng/mL has been suggested, with an onset of action within 10–30 min of ingestion of 10 mg zolpidem .
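To make the response criterion described above concrete, the following is a minimal illustrative sketch of how a challenge response might be quantified from ratings taken before and after administration on a standardised scale such as the BFCRS. The 50% threshold follows the common definition mentioned above, but the function and the example scores are assumptions for demonstration only and are not a substitute for clinical judgement.

# Illustrative sketch: quantifying response to a lorazepam (or zolpidem) challenge
# as the proportional reduction in a standardised catatonia rating (e.g. BFCRS total).
# The example scores are invented; this is not a clinical decision tool.

def challenge_response(score_before: float, score_after: float,
                       threshold: float = 0.5) -> tuple:
    # Returns the proportional reduction and whether it meets the commonly used
    # >=50% reduction criterion for a positive challenge.
    if score_before <= 0:
        raise ValueError("Baseline score must be positive to assess reduction.")
    reduction = (score_before - score_after) / score_before
    return reduction, reduction >= threshold

if __name__ == "__main__":
    # Hypothetical patient: BFCRS total falls from 18 to 6 after 2 mg of lorazepam.
    reduction, positive = challenge_response(18, 6)
    print(f"Reduction: {reduction:.0%}, positive challenge: {positive}")  # 67%, True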
Mutism has been shown not to be a good prognostic sign for lorazepam response, so it is interesting that zolpidem may differentially help improve impairment of verbal fluency in patients with catatonia . Other drugs In contrast to reports of ketamine causing catatonic signs, there is at least one report of slow IV injection of sub-anaesthetic doses of ketamine (12.5 mg) producing dramatic improvement in catatonic signs . More studies, including randomised controlled trials (RCTs), are needed before this translates into clinical practice as a diagnostic test. Recommendations on the use of challenge tests When a diagnosis of catatonia is uncertain, a diagnostic challenge using lorazepam should be considered. (B) When a diagnosis of catatonia is uncertain, a diagnostic challenge using zolpidem may be considered. (C) In suspected or confirmed cases of catatonia, a lorazepam challenge may be used to predict future response to benzodiazepines. (B)
There is some overlap between the differential diagnosis of catatonia (i.e. mimics of catatonia) and the conditions that may underlie catatonia. For example, NMS is sometimes listed in both categories, probably because of diverging views as to what extent it represents a form of catatonia (see section ‘Neuroleptic malignant syndrome’). For some conditions, their status is subject to debate. In , we provide a list of some of the more important conditions that may mimic catatonia, what the similarities are and how they can be differentiated. As general principles, the positive features of catatonia (such as echophenomena, catalepsy and posturing) may have greater discriminatory value than some of the negative features (such as mutism and stupor). Challenge tests are useful in many situations (see section ‘Challenge tests’), but their sensitivity and specificity are imperfect; importantly, stiff person syndrome and NCSE are likely to improve with a lorazepam challenge. Although it has been asserted that serotonin syndrome (SS) is a form of catatonia , there is currently insufficient systematic evidence to support this claim . Furthermore, although ECT, a core intervention for catatonia, has been advocated for the treatment of SS , recent reports suggest that it is ineffective and, in fact, may exacerbate SS . General approach The evidence base for the treatment of catatonia is not extensive. Several RCTs have been conducted, but they have usually been at high risk of bias, inadequately reported, using outdated treatments or applicable to only a small subset of patients with catatonia . One systematic review found only four studies that had more than 50 participants . Nonetheless, where there is converging evidence from multiple sources, some clinically relevant inferences can be made. Many treatments for catatonia are unlicensed applications for licensed medicines. Relevant guidance on this issue has been produced by the General Medical Council, the Royal College of Psychiatrists in association with the BAP, and the Royal College of Paediatrics and Child Health .
While this guidance recommends that prescribing should usually be within a product’s licence, it is recognised that there are situations in which prescribing off-licence is appropriate. Beyond the common standards for good prescribing, it is advised to use licensed medications in preference where appropriate, to be familiar and satisfied with evidence for safety and efficacy, to seek advice where necessary, giving sufficient information to patients, to inform patients that a medicine is being used outside its licence, to take consent or to document where this is not possible, to start at a low dose and to inform other professionals that the medicine is being used off-licence. There are two distinct aspects to treating catatonia: specific treatments for catatonia per se and treatments for the disorder(s) underlying catatonia, where identified. While employing either one of these approaches may be effective in some cases, there are many cases where using either one of these strategies alone fails but using the other or a combination of the two is successful . In addition, consideration must be given to the prevention and management of the medical complications of catatonia. First-line treatment Several studies have found that response to catatonia treatment is more likely or more rapid in patients with a shorter duration of illness , although this has not universally been the case . Given this preponderance of evidence and the likely explanation that catatonia becomes less treatment-responsive with time, we recommend treating catatonia as soon as possible after its identification. In terms of first-line treatments, there is most evidence for benzodiazepines and ECT . We provide more detail about these treatments in sections ‘GABA-ergic pharmacotherapies’ and ‘Electroconvulsive therapy’, but here we consider the question of which to use as first-line therapy. Response rates are similar: 59–100% for ECT and 66–100% in Western studies of benzodiazepines (although some Asian studies found lower response rates) . If one treatment is contraindicated, this makes the decision simpler. Beyond this, consideration should be given to the potential of ECT to ameliorate a disorder underlying the catatonia (NICE recommends ECT for severe depression and prolonged or severe mania in certain circumstances; , ), balancing the side effects of ECT (particularly the small risk associated with a general anaesthetic, risk of status epilepticus, post-ictal confusion and autobiographical memory loss) and the side effects of benzodiazepines (particularly respiratory depression, sedation and amnesia). Other considerations more specific to ECT include often limited availability, delays in accessing care, legal issues obtaining consent and patient preferences. There are several studies of ECT after benzodiazepines have been ineffective, reporting high response rates . There is a case series and uncontrolled cohort study suggesting that the combination of benzodiazepines and ECT may be effective . There are several special cases to these recommendations about first-line treatment, which are as follows: Clozapine-withdrawal catatonia: a systematic review of case reports found that restarting clozapine or using ECT were the most effective treatment strategies, while benzodiazepines were less effective . Benzodiazepine-withdrawal catatonia: a systematic review of case reports found that reinstating benzodiazepines was generally effective . Catatonia in autism spectrum disorder: see section ‘Autism spectrum disorder’. 
Chronic, milder catatonia in the context of schizophrenia: there is some evidence that this tends not to respond to benzodiazepines or ECT . There is some evidence based on observational data that these patients may respond to clozapine . There have been rare cases of cardiorespiratory arrest associated with the concomitant use of clozapine and benzodiazepines , so caution should be exercised if there is co-administration. Malignant catatonia: see section ‘Periodic catatonia’. NMS: see section ‘Neuroleptic malignant syndrome’. Antipsychotic-induced catatonia: see section ‘Antipsychotic-induced catatonia’. Women in the perinatal period: see section ‘The perinatal period’. Non-response Where benzodiazepines or ECT do not succeed in achieving remission of catatonia, it is important to re-evaluate the diagnosis. In one study of 21 patients who entered an RCT for catatonia, 2 of the non-responders were subsequently diagnosed with Parkinson’s disease . For alternative treatment approaches, see section ‘Other therapies’. Underlying condition Alongside treating the catatonia, it is important to treat any underlying disorder. This may involve psychotropic medications (e.g. antidepressants), other medical therapies (e.g. antibiotics, immunosuppressants) or even occasionally surgical treatments (e.g. tumour resection in the case of a paraneoplastic syndrome). Guidelines for treating relevant psychiatric disorders are available from the BAP . There is some controversy over the use of antipsychotic medications in catatonia, which is discussed in section ‘Dopamine receptor antagonists and partial agonists’. Complications Some, though not all, studies have associated catatonia with an increased mortality . There is an extensive case report literature on the medical complications of catatonia and a large cohort study of patients with schizophrenia found that those with catatonic stupor had an increased risk of various infections (pneumonia, urinary tract infection and sepsis), disseminated intravascular coagulation, rhabdomyolysis, dehydration, deep vein thrombosis, pulmonary embolus, urinary retention, decubitus ulcers, cardiac arrhythmia, renal failure, NMS, hypernatraemia and liver dysfunction . Guidance has been developed for averting such complications, which include recommendations such as pharmacological thromboprophylaxis, frequent assessment of pressure areas, stretching to avoid muscle contractures and consideration of artificial feeding . Recommendations on the general approach to treating catatonia Treatment for catatonia should be instituted quickly after identification of catatonia and it is not always necessary to await results of all investigations before commencing treatment. (D) Prescribing outside of a product licence is often justified in catatonia, but where a prescriber does this, they should take particular care to provide information to the patient or carer and obtain consent, where possible, taking advice where necessary. (S) Catatonia treatment should consist of specific treatment for the catatonia, treatment of any underlying disorder and prevention and management of complications of catatonia. (S) First-line treatment for catatonia should usually consist of a trial of benzodiazepines and/or ECT, (C) but see references to special cases in ‘First-line treatment’ and below. ECT should be available in any settings where catatonia may be treated, including in psychiatric and general hospitals. 
(S) When deciding between benzodiazepines and ECT as a first-line treatment, consider the following factors: side effect profile, whether there is an underlying disorder that is likely to be responsive to ECT (such as depression or mania) and availability of ECT. (S) Where benzodiazepines have not resulted in remission, ECT should be used. (B) For details of what an adequate trial of benzodiazepines consists of, see section ‘GABA-ergic pharmacotherapies’. Where catatonia has resulted from clozapine withdrawal, restart clozapine if possible and, if necessary, use ECT. (D) Where catatonia has resulted from benzodiazepine withdrawal, restart a benzodiazepine. (D) If catatonia is chronic and mild in the context of schizophrenia, consider a trial of clozapine. (C) If clozapine and benzodiazepines are administered concomitantly, titrate slowly and closely monitor vital signs. (S) Where catatonia does not respond to first-line therapy, re-evaluate the diagnosis. (D) GABA-ergic pharmacotherapies The use of pharmacotherapies for catatonia that augment GABA-ergic signalling pathways is supported by neuroimaging studies. An iomazenil GABA-SPECT study found that patients with catatonia (in a post-acute state) showed significantly lower iomazenil binding in the sensorimotor cortex as well as in the parietal cortex and prefrontal cortex (PFC). The same group of patients was followed up in post-acute catatonia with a subsequent functional MRI (fMRI) study in which emotional stimulation was applied before and after lorazepam administration: the orbitofrontal-ventromedial PFC was particularly responsive to a lorazepam challenge, normalising its activity . The involvement of the orbitofrontal-ventromedial PFC was further supported by a separate fMRI study in which post-acute catatonia patients showed significantly lower emotion-induced activity in this region compared to psychiatric patients without catatonia with the same underlying diagnosis and healthy controls . Given that the orbitofrontal-ventromedial PFC is strongly involved in emotion processing, which is mediated by GABA activity, these findings provide further evidence for GABA-ergic mechanisms in catatonia including both GABA-A and GABA-B receptors . In terms of clinical findings, a double-blind RCT investigated the effect of the barbiturate derivative amobarbital in 1992, finding that of 10 patients randomised to the drug, 6 responded, compared to none of the 10 randomised to a saline infusion . However, barbiturate use has largely been abandoned since due to safety concerns . Acute catatonia often shows a rapid and dramatic response to benzodiazepines in case series and observational studies , although a Cochrane review found no placebo-controlled RCTs evaluating benzodiazepines in catatonia . One review identified 17 studies describing benzodiazepine use in patients with catatonia. Most used lorazepam 1–4 mg per day, with some using up to 16 mg per day. Some sources recommend a maximum dose of 24 mg per day and there are cases of such doses being helpful . Some studies have used other benzodiazepines, such as oxazepam, diazepam, clonazepam or flurazepam, and a small RCT found no difference in outcome between lorazepam and oxazepam treatment . However, lorazepam is the most commonly used benzodiazepine for catatonia: it is available in several formulations, and there is a large amount of clinical experience with its use, including at high doses. Administration can be oral, IM or IV .
Parenteral administration can be particularly useful if oral administration is not possible, for example due to negativism. Lorazepam is usually administered in 2–4 divided doses each day . Reported response rates range from 66% to 100% . These studies were mainly conducted in Western countries. Studies conducted in India and elsewhere in Asia show more variable response, ranging from 0% to 100% . The reason for these differences remains unclear, but it is possible that – given that lorazepam is unstable at room temperature – storage conditions may play a role. Usually, administration of lorazepam is well tolerated, and major side effects are rare. Even a dose as high as 16 mg of lorazepam is often well tolerated without sedation . Therapeutic response may entail partial or complete remission within hours, though it may sometimes take several days . The therapeutic response seems to be strongest in acute catatonia where the patient presents with a rapid-onset catatonic state . This is especially the case in patients suffering from bipolar disorder and major depressive disorder . In contrast, patients with chronic catatonia, especially in the context of schizophrenia, show a weaker response to lorazepam and are more likely to receive ECT . One important issue is the weaning of benzodiazepines. There is a need to balance the therapeutic benefits and the risks of withdrawal effects against dependence and the various risks of long-term benzodiazepine use . Withdrawal schedules for benzodiazepines exist, but these are generally designed for individuals who have been treated with benzodiazepines for months or years , whereas benzodiazepines in catatonia are often used for days or weeks. Nonetheless, such withdrawal schedules are associated with higher retention in treatment and better tolerability than abrupt discontinuation, and the latter risks potentially fatal withdrawal seizures. In one case series of seven patients who had a relapse of their catatonia on withdrawal of lorazepam (the speed of withdrawal ranging from abrupt discontinuation to dose reduction by 1 mg per week), all had resolution of catatonia once lorazepam was restored to its previous dose and four were able to successfully wean off more slowly over 6 weeks, although three received long-term lorazepam treatment to prevent relapse . There are other reports of long-term benzodiazepines being used to prevent re-emergence of catatonia . Therefore, some form of taper seems reasonable and, in the event that catatonia re-emerges following benzodiazepine withdrawal, it is sensible to ensure that an underlying condition has been appropriately treated, as well as to undertake a slower taper. Recommendations on the use of GABA-ergic medications in catatonia Where benzodiazepines are used for catatonia, available routes of administration may include oral, sublingual, IM and IV. The choice of route should be decided based on clinical appropriateness, rapidity of the required response, patient preference, local experience and availability. (S) Where benzodiazepines are used for catatonia, lorazepam is generally the preferred agent. (S) Where lorazepam is used for catatonia, high doses above the licensed maximum may be necessary to achieve maximal effect. An adequate trial may be considered complete when catatonia is adequately treated, titration has been stopped due to side effects or the dose has reached at least 16 mg per day. (C) Benzodiazepines for catatonia should not be stopped abruptly but rather tapered down.
The speed of the taper depends on a balance of the therapeutic benefits and the risks of withdrawal effects against the possibility of dependence and the risks of long-term harm from benzodiazepines. (S) If catatonia relapses on withdrawal of benzodiazepines, a clinician should ensure that any underlying condition has been adequately treated and a slower taper may be tried. (S) Electroconvulsive therapy The first patients treated with convulsive therapy, both for chemically induced seizures by Meduna in 1934 and for electrically induced seizures by Cerletti and Bini in 1938, had catatonic illnesses . Since then, governmental authorities, authors of textbooks on ECT or catatonia, and most publications discussing treatment options for catatonia endorse ECT, usually as the most effective treatment even where medications or other interventions have failed. For example, the United States FDA panel endorsed ECT for catatonia under a less restrictive Class 2 safety/efficacy designation and in the UK NICE recommends ECT for catatonia . In the UK and many other countries, there are specific legal requirements for administering ECT in a patient who is unable to consent. Despite this extensive clinical recognition in common practice, a rigorous base of high-quality published evidence is lacking. This deficiency of RCTs arises principally from practical difficulties in conducting sham or placebo treatment arms in people who are usually severely ill with catatonia and often lack an ability to participate in informed-consent processes for such clinical trials. Among several reviews of existing evidence on ECT for catatonia, the most recent comprehensive one was a meta-analysis . Three RCTs involving ECT for patients with catatonia have been conducted, all of which were in patients with primary psychotic disorders . Comparisons were between ECT and risperidone ; ECT, sham ECT and sodium thiopental ; and bifrontal ECT and bitemporal ECT . Two of the trials were conducted specifically in patients with catatonia , while one had a catatonic subgroup . Unfortunately, none of these contained both standardised ratings for outcome and quantitative results that would allow for statistical determinations of effect size . The review did, however, identify 10 studies with such data on quantitative outcomes, but they lacked control groups. Bilateral forms of ECT were the typical treatment modality. From these 10 studies, a meta-analysis showed a standardised mean difference between pre-post severity scores of −3.14, which represents a highly effective treatment. Reported side effects were similar to those seen generally in the use of ECT for depression. Since Leroy et al.’s review, four additional studies of ECT with pre/post-quantitative outcomes have been published . All were naturalistic case series or retrospective analyses, using Clinical Global Impression (CGI) or BFCRS quantitative outcomes. Results ranged from decreases in scores of 40% to 82%, and of response (final CGI ⩽2) rates from 83% to 90%. Pierson et al. studied adolescents ⩽18 years, reporting 90% met the CGI criteria for response . Most published reports describing ECT for catatonia have used bilateral forms of ECT , which are generally recommended for severe, medication-resistant or malignant forms of catatonia. No studies were found comparing bilateral versus unilateral ECT for catatonia. 
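For readers unfamiliar with the effect-size metric quoted above, the short sketch below shows one common way a pre-post standardised mean difference is computed (mean change divided by the standard deviation of baseline scores); published meta-analyses may use other variants, and the scores here are invented rather than drawn from the cited analysis.

# Illustrative sketch: a pre-post standardised mean difference (SMD).
# This divides the mean change by the standard deviation of baseline scores;
# published meta-analyses may use other variants (e.g. SD of change scores,
# small-sample corrections). The scores are invented for demonstration.
import statistics

def pre_post_smd(pre_scores, post_scores):
    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return statistics.mean(changes) / statistics.stdev(pre_scores)

if __name__ == "__main__":
    pre = [22.0, 18.0, 25.0, 20.0, 17.0]   # hypothetical baseline severity totals
    post = [4.0, 2.0, 6.0, 3.0, 2.0]       # hypothetical post-treatment totals
    print(round(pre_post_smd(pre, post), 2))  # negative values indicate improvement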
In terms of ECT sessions, most studies captured by Leroy et al.’s review that reported ECT frequency described ECT as taking place three times weekly, although this ranged between daily and twice weekly . The number of sessions ranged from 3 to 35, with a mean of 9 . There is a lack of data on the superiority of these differing protocols. Recommendations for the use of ECT in catatonia Where ECT is administered, bilateral ECT should be considered. (S) Where ECT is administered in acute catatonia, it should be given at least two times weekly. (S) The number of ECT sessions should be decided on the basis of treatment response, risks and side effects. (S) Other therapies While the majority of patients with catatonia respond robustly to benzodiazepines or ECT, some patients have a partial response or no response . Catatonia associated with schizophrenia may be less likely to respond to benzodiazepines . In addition, there are cautions against the use of benzodiazepines and ECT in some circumstances. There are also barriers to ECT use such as legal restrictions and stigma. These factors have prompted the trialling of several alternative agents, either as monotherapies or as augmentation strategies. The studies examining adjunctive medications for catatonia have consisted of prospective cohort studies, open-label prospective studies, retrospective chart reviews, case series and an open-label double-blind trial. NMDA receptor antagonists The NMDA receptor may be allosterically more available to glutamate in catatonia, leading to dysfunction in cortico-striato-thalamo-cortical (CSTC) circuits. The NMDA receptor antagonists, amantadine and memantine, may reset the problems related to reduced dopamine and GABA in the CSTC circuitries by balancing NMDA receptor effects on PFC GABA-A parvalbumin interneurons that inhibit PFC pyramidal corticostriatal glutamatergic projections to the striatum while also reducing NMDA action in the striatum itself . Medications such as amantadine and memantine serve as uncompetitive antagonists of the NMDA receptor and thus may be helpful in patients with catatonia. Amantadine has the added theoretical benefit of enhancing central dopamine release and delaying dopamine reuptake from the synapse; since catatonia is hypothesised to be, to some degree, a disorder of hypodopaminergic tone, this profile may also benefit patients with catatonia . One systematic review reported on 11 articles that described the use of amantadine in 18 cases . Most patients had schizophrenia spectrum disorders, and some had medical comorbidities. Amantadine as monotherapy often abolished catatonia after a few doses. Five cases involved IV use and the others involved oral dosing. Oral doses ranged from 100 to 600 mg daily, with most patients receiving 200 mg daily. Daily IV doses ranged from 400 to 600 mg. An update in 2018 reported three more amantadine cases, which used a mean oral dose of 306 (standard deviation (SD): 189) mg a day. In two of these cases, ECT was also used, and in another the results were equivocal. In a further review, seven additional patients with catatonia, six of whom were diagnosed with schizophrenia, were treated successfully with oral amantadine 200 mg a day. Another patient with atypical psychosis with catatonia showed no improvement with amantadine, though upon removal of amantadine the condition worsened .
In a clinical study of catatonia in neurologic and psychiatric patients in a tertiary neurological centre, 23 of 42 patients with catatonia related to a neurological disorder received adjunctive amantadine (mean dose 243 (SD: 57) mg/day), most often in addition to first-line oral lorazepam (mean dose 7.3 (SD: 2.8) mg/day) treatment. All patients achieved remission of their catatonia except for two patients who died of encephalitis or encephalomyelitis . In Beach et al.’s (2017) review, nine papers reported memantine treatment in nine cases . Again, schizophrenia-spectrum illnesses were predominantly represented in this sample. Memantine was commonly prescribed as an adjunctive treatment in combination with benzodiazepines. A later update added three unpublished memantine cases and reported that the mean daily dose used for all 12 cases was 12.5 (SD: 6.2) mg. A few additional articles cite the benefits for catatonia of other medications that may act as glutamate antagonists. These include four cases of minocycline use and one case of dextromethorphan–quinidine use . In summary, across 58 published cases plus other additional reports of amantadine and memantine use in catatonia of various aetiologies, substantial improvement was reported. This improvement usually occurred within a 7-day window . A bias towards the non-reporting of negative results must be considered, making the lack of RCTs and controlled studies an important shortcoming. Dopamine precursors, agonists and reuptake inhibitors The dopamine system modulates motivation and movement by informing the anterior cingulate cortex/mid-cingulate cortex when a task is associated with high predictive value (tonic dopamine) as well as when circumstances abruptly change to better or worse than predicted (phasic dopamine) . It has been proposed that the dopamine system in the midbrain ventral tegmental area/substantia nigra functions as a manager of sorts for the CSTC circuits thought to be implicated in catatonia. The dopamine agonists and precursors can be hypothesised to treat catatonia by increasing dopamine modulation and by favouring the striosomal direct pathway as they do in akinetic mutism, leading to opening of the thalamic filter with feedforward activation of cortical regions including the supplementary motor area and primary motor cortex. Levodopa is a dopamine precursor that is often used in combination with a peripheral DOPA decarboxylase inhibitor (e.g. carbidopa and benserazide) in the treatment of Parkinson’s disease. A case report and small case series found marked improvement after treatment with levodopa, although the case series reported worsening of psychosis . Bromocriptine, a dopamine D2 receptor agonist, was used successfully in a 16-year-old girl with catatonia . There is also a literature on the use of dopamine agonists in the related conditions of NMS (see section ‘Neuroleptic malignant syndrome’) and akinetic mutism, a neurological condition associated with lesions to frontal-subcortical circuits . Methylphenidate is a noradrenaline and dopamine reuptake inhibitor. There have been five case reports of successful use of methylphenidate for catatonia . Most of these cases were due to mood disorders, and most used methylphenidate as monotherapy. Dopamine receptor antagonists and partial agonists The use of antipsychotics is one of the most controversial areas in catatonia management . Antipsychotic medications can induce catatonia (see section ‘Antipsychotic-induced catatonia’) and worsen it .
Catatonia is also a risk factor for NMS , a severe antipsychotic-induced movement disorder. Moreover, in some studies of catatonia, the use of antipsychotics has been associated with poor outcomes . Nevertheless, dopamine receptor antagonists and partial agonists have been reported in some cases as beneficial in catatonia . This may particularly be the case in catatonic schizophrenia . There have been reports of the use of olanzapine , risperidone , ziprasidone , quetiapine and aripiprazole . Second-generation antipsychotics (SGAs) theoretically would be less likely to strongly antagonise dopamine receptors making them potentially less dangerous adjunctive treatments than first-generation antipsychotics (FGAs) in terms of NMS risk. Aripiprazole’s partial agonism might balance the dopaminergic effects and be of some benefit for catatonia . A recent Cochrane review found only one RCT of antipsychotics for schizophrenia spectrum disorders with catatonic features, and considered the evidence to be of very low quality due to a small sample size, short duration, risk of bias and other methodological issues . This RCT compared risperidone to ECT, finding greater improvement in the ECT-treated group . Given that D 2 receptor antagonists can worsen catatonia and trigger NMS in an at-risk group, some reviews have urged caution, especially in malignant catatonia . It has also been suggested that SGAs – or an FGA with weaker dopamine receptor affinity – should be preferred . Some sources suggest that antipsychotics should only be given in catatonia if co-administered with a benzodiazepine . One review concluded that there does not seem to be evidence to support the use of SGAs in patients with catatonia without an underlying psychosis . Two small studies have suggested that low serum iron in catatonia is associated with the development of NMS, leading some to suggest that serum iron may be used in catatonia to predict those who may develop NMS, but the evidence is not of a high quality . Regarding clozapine, a systematic review found there is some evidence from case reports and small uncontrolled observational studies that clozapine may be effective in catatonic schizophrenia . In the largest identified study, 55 patients with catatonic schizophrenia received clozapine, resulting in 2 cases of complete remission, 48 cases of partial remission and 5 cases of no remission . Where catatonia occurs in the context of clozapine withdrawal, a systematic review of case reports found that re-initiation of clozapine or the use of ECT was usually effective, while benzodiazepines were less reliable . Anticonvulsants Leaving aside the cases where catatonia is a presentation of NCSE , catatonia has occasionally been treated with anticonvulsant medications. Evidence consists of case series and case reports. Three articles have reported using carbamazepine to treat catatonia in seven cases . Most cases were associated with a mood disorder, and carbamazepine was found to be effective without the need for benzodiazepines. Doses ranged from 100 to 1000 mg daily, with six cases receiving 600 mg daily or greater. Valproic acid use in catatonia has been reported in four papers in which five patients were suffering with psychoses, mostly schizophrenia spectrum in nature. In three instances, excited catatonia was noted as part of the presentation. These patients were treated successfully with valproic acid . Doses ranged from 600 to 4000 mg daily. 
Another case series of four patients highlighted the benefits of topiramate in the treatment of catatonia . Here, too, most of these patients had been diagnosed with schizophrenia-like illnesses. Topiramate was used as an adjunctive treatment along with a benzodiazepine. All four cases improved on 200 mg daily. Phenytoin has been reported to be effective in cases where catatonia has appeared in the context of bacterial meningoencephalitis, NCSE and frontal lobe seizures . Levetiracetam and zonisamide have each been used in one case along with aripiprazole . Anticholinergic agents Two case reports described using benztropine IV as monotherapy to treat catatonia in two cases . In another case, trihexyphenidyl was used in combination with clozapine to treat catatonia . All patients had a schizophrenia-spectrum illness. In a fourth case, several medications, including trihexyphenidyl, were used to treat catatonia in a young woman with Wilson’s disease . Miscellaneous treatments Muscle relaxants, calcium channel blockers and corticosteroids have all anecdotally been associated with improvement in isolated patients with catatonia . Lithium and other treatments for prophylaxis in periodic catatonia warrant particular attention and are considered in section ‘Periodic catatonia’. Repetitive transcranial magnetic stimulation and transcranial direct-current stimulation as alternatives to ECT There are conditions and situations that discourage the use of ECT after non-response to benzodiazepines and second-line agents, as well as circumstances in which maintenance ECT would be required; this offers a potential niche for newer neuromodulatory treatments such as repetitive transcranial magnetic stimulation (rTMS) and transcranial direct-current stimulation (tDCS) in the treatment of catatonia. Two systematic reviews have covered this topic and found that the majority of case reports and case series in the literature reported a positive response . rTMS over the bilateral dorsolateral PFC has been particularly emphasised . Adverse effects appear to be minimal . Recommendations on the use of other therapies Where first-line therapies for catatonia are unavailable, cautioned, ineffective or only partially effective, consider a trial of an NMDA receptor antagonist, either amantadine or memantine. (C) Where first-line therapies and NMDA receptor antagonists are unavailable, cautioned, ineffective or only partially effective, consider a trial of levodopa, a dopamine agonist, carbamazepine, valproate, topiramate or a SGA. (D) Antipsychotic medications should be avoided where there is no underlying psychotic disorder. (C) Where catatonia exists in the context of an underlying psychotic disorder, if antipsychotic medications are used, they should be prescribed with caution after an evaluation of the potential benefits and risks, including the risk of NMS. Additional caution should be exercised if there is low serum iron or a prior history of NMS. If antipsychotic medications are used, a SGA should be used with gradual titration, and co-administration of a benzodiazepine should be considered. (S) Where ECT is indicated but unavailable, consider treatment with rTMS or tDCS. (D)
Nonetheless, where there is converging evidence from multiple sources, some clinically relevant inferences can be made. Many treatments for catatonia are unlicensed applications for licensed medicines. Relevant guidance on this issue has been produced by the General Medical Council, the Royal College of Psychiatrists in association with the BAP, and the Royal College of Paediatrics and Child Health . While this guidance recommends that prescribing should usually be within a product’s licence, it is recognised that there are situations in which prescribing off-licence is appropriate. Beyond the common standards for good prescribing, it is advised to use licensed medications in preference where appropriate, to be familiar and satisfied with evidence for safety and efficacy, to seek advice where necessary, giving sufficient information to patients, to inform patients that a medicine is being used outside its licence, to take consent or to document where this is not possible, to start at a low dose and to inform other professionals that the medicine is being used off-licence. There are two distinct aspects to treating catatonia: specific treatments for catatonia per se and treatments for the disorder(s) underlying catatonia, where identified. While employing either one of these approaches may be effective in some cases, there are many cases where using either one of these strategies alone fails but using the other or a combination of the two is successful . In addition, consideration must be given to the prevention and management of the medical complications of catatonia. First-line treatment Several studies have found that response to catatonia treatment is more likely or more rapid in patients with a shorter duration of illness , although this has not universally been the case . Given this preponderance of evidence and the likely explanation that catatonia becomes less treatment-responsive with time, we recommend treating catatonia as soon as possible after its identification. In terms of first-line treatments, there is most evidence for benzodiazepines and ECT . We provide more detail about these treatments in sections ‘GABA-ergic pharmacotherapies’ and ‘Electroconvulsive therapy’, but here we consider the question of which to use as first-line therapy. Response rates are similar: 59–100% for ECT and 66–100% in Western studies of benzodiazepines (although some Asian studies found lower response rates) . If one treatment is contraindicated, this makes the decision simpler. Beyond this, consideration should be given to the potential of ECT to ameliorate a disorder underlying the catatonia (NICE recommends ECT for severe depression and prolonged or severe mania in certain circumstances; , ), balancing the side effects of ECT (particularly the small risk associated with a general anaesthetic, risk of status epilepticus, post-ictal confusion and autobiographical memory loss) and the side effects of benzodiazepines (particularly respiratory depression, sedation and amnesia). Other considerations more specific to ECT include often limited availability, delays in accessing care, legal issues obtaining consent and patient preferences. There are several studies of ECT after benzodiazepines have been ineffective, reporting high response rates . There is a case series and uncontrolled cohort study suggesting that the combination of benzodiazepines and ECT may be effective . 
There are several special cases to these recommendations about first-line treatment, which are as follows: Clozapine-withdrawal catatonia: a systematic review of case reports found that restarting clozapine or using ECT were the most effective treatment strategies, while benzodiazepines were less effective . Benzodiazepine-withdrawal catatonia: a systematic review of case reports found that reinstating benzodiazepines was generally effective . Catatonia in autism spectrum disorder: see section ‘Autism spectrum disorder’. Chronic, milder catatonia in the context of schizophrenia: there is some evidence that this tends not to respond to benzodiazepines or ECT . There is some evidence based on observational data that these patients may respond to clozapine . There have been rare cases of cardiorespiratory arrest associated with the concomitant use of clozapine and benzodiazepines , so caution should be exercised if there is co-administration. Malignant catatonia: see section ‘Periodic catatonia’. NMS: see section ‘Neuroleptic malignant syndrome’. Antipsychotic-induced catatonia: see section ‘Antipsychotic-induced catatonia’. Women in the perinatal period: see section ‘The perinatal period’. Non-response Where benzodiazepines or ECT do not succeed in achieving remission of catatonia, it is important to re-evaluate the diagnosis. In one study of 21 patients who entered an RCT for catatonia, 2 of the non-responders were subsequently diagnosed with Parkinson’s disease . For alternative treatment approaches, see section ‘Other therapies’. Underlying condition Alongside treating the catatonia, it is important to treat any underlying disorder. This may involve psychotropic medications (e.g. antidepressants), other medical therapies (e.g. antibiotics, immunosuppressants) or even occasionally surgical treatments (e.g. tumour resection in the case of a paraneoplastic syndrome). Guidelines for treating relevant psychiatric disorders are available from the BAP . There is some controversy over the use of antipsychotic medications in catatonia, which is discussed in section ‘Dopamine receptor antagonists and partial agonists’. Complications Some, though not all, studies have associated catatonia with an increased mortality . There is an extensive case report literature on the medical complications of catatonia and a large cohort study of patients with schizophrenia found that those with catatonic stupor had an increased risk of various infections (pneumonia, urinary tract infection and sepsis), disseminated intravascular coagulation, rhabdomyolysis, dehydration, deep vein thrombosis, pulmonary embolus, urinary retention, decubitus ulcers, cardiac arrhythmia, renal failure, NMS, hypernatraemia and liver dysfunction . Guidance has been developed for averting such complications, which include recommendations such as pharmacological thromboprophylaxis, frequent assessment of pressure areas, stretching to avoid muscle contractures and consideration of artificial feeding . Recommendations on the general approach to treating catatonia Treatment for catatonia should be instituted quickly after identification of catatonia and it is not always necessary to await results of all investigations before commencing treatment. (D) Prescribing outside of a product licence is often justified in catatonia, but where a prescriber does this, they should take particular care to provide information to the patient or carer and obtain consent, where possible, taking advice where necessary. 
(S) Catatonia treatment should consist of specific treatment for the catatonia, treatment of any underlying disorder and prevention and management of complications of catatonia. (S) First-line treatment for catatonia should usually consist of a trial of benzodiazepines and/or ECT, (C) but see references to special cases in ‘First-line treatment’ and below. ECT should be available in any settings where catatonia may be treated, including in psychiatric and general hospitals. (S) When deciding between benzodiazepines and ECT as a first-line treatment, consider the following factors: side effect profile, whether there is an underlying disorder that is likely to be responsive to ECT (such as depression or mania) and availability of ECT. (S) Where benzodiazepines have not resulted in remission, ECT should be used. (B) For details of what an adequate trial of benzodiazepines consists of, see section ‘GABA-ergic pharmacotherapies’. Where catatonia has resulted from clozapine withdrawal, restart clozapine if possible and, if necessary, use ECT. (D) Where catatonia has resulted from benzodiazepine withdrawal, restart a benzodiazepine. (D) If catatonia is chronic and mild in the context of schizophrenia, consider a trial of clozapine. (C) If clozapine and benzodiazepines are administered concomitantly, titrate slowly and closely monitor vital signs. (S) Where catatonia does not respond to first-line therapy, re-evaluate the diagnosis. (D) Several studies have found that response to catatonia treatment is more likely or more rapid in patients with a shorter duration of illness , although this has not universally been the case . Given this preponderance of evidence and the likely explanation that catatonia becomes less treatment-responsive with time, we recommend treating catatonia as soon as possible after its identification. In terms of first-line treatments, there is most evidence for benzodiazepines and ECT . We provide more detail about these treatments in sections ‘GABA-ergic pharmacotherapies’ and ‘Electroconvulsive therapy’, but here we consider the question of which to use as first-line therapy. Response rates are similar: 59–100% for ECT and 66–100% in Western studies of benzodiazepines (although some Asian studies found lower response rates) . If one treatment is contraindicated, this makes the decision simpler. Beyond this, consideration should be given to the potential of ECT to ameliorate a disorder underlying the catatonia (NICE recommends ECT for severe depression and prolonged or severe mania in certain circumstances; , ), balancing the side effects of ECT (particularly the small risk associated with a general anaesthetic, risk of status epilepticus, post-ictal confusion and autobiographical memory loss) and the side effects of benzodiazepines (particularly respiratory depression, sedation and amnesia). Other considerations more specific to ECT include often limited availability, delays in accessing care, legal issues obtaining consent and patient preferences. There are several studies of ECT after benzodiazepines have been ineffective, reporting high response rates . There is a case series and uncontrolled cohort study suggesting that the combination of benzodiazepines and ECT may be effective . 
GABA-ergic pharmacotherapies
Evidence for pharmacotherapies for catatonia that augment GABA-ergic signalling pathways is supported by neuroimaging studies. One group conducted an iomazenil GABA-SPECT study and found that patients with catatonia (in a post-acute state) showed significantly lower iomazenil binding in the sensorimotor cortex as well as in the parietal cortex and prefrontal cortex (PFC). The same patient group was followed up in post-acute catatonia with a subsequent functional MRI (fMRI) study in which emotional stimulation was applied before and after lorazepam administration: the orbitofrontal-ventromedial PFC was particularly responsive to a lorazepam challenge, normalising its activity. The involvement of the orbitofrontal-ventromedial PFC was further supported by a separate fMRI study in which post-acute catatonia patients showed significantly lower emotion-induced activity in this region compared to psychiatric patients without catatonia with the same underlying diagnosis and healthy controls. Given that the orbitofrontal-ventromedial PFC is strongly involved in emotion processing, which is mediated by GABA activity, these findings provide further evidence for GABA-ergic mechanisms in catatonia, including both GABA-A and GABA-B receptors.
In terms of clinical findings, a double-blind RCT investigated the effect of the barbiturate derivative amobarbital in 1992, finding that of 10 patients randomised to the drug, 6 responded, compared to none of the 10 randomised to a saline infusion. However, barbiturate use has largely been abandoned since due to safety concerns. Acute catatonia often shows a rapid and dramatic response to benzodiazepines in case series and observational studies, although a Cochrane review found no placebo-controlled RCTs evaluating benzodiazepines in catatonia. One review reported 17 studies describing benzodiazepine use in patients with catatonia. Most used lorazepam 1–4 mg per day, with some using up to 16 mg per day. Some sources recommend a maximum dose of 24 mg per day and there are cases of such doses being helpful.
Some studies have used other benzodiazepines, such as oxazepam, diazepam, clonazepam or flurazepam, and a small RCT found no difference in outcome between lorazepam and oxazepam treatment. However, lorazepam is the most commonly used benzodiazepine for catatonia: it is available in several formulations and there is extensive clinical experience with its use, including at high doses. Administration can be oral, IM or IV. Parenteral administration can be particularly useful if oral administration is not possible, for example due to negativism. Lorazepam is usually administered in 2–4 divided doses each day. Reported response ranges from 66% up to 100%. These studies were mainly conducted in Western countries. Studies conducted in India and Asia show more variable response, ranging from 0% to 100%. The reason for these differences remains unclear, but it is possible that – given that lorazepam is unstable at room temperature – storage conditions may play a role. Usually, administration of lorazepam is well tolerated, and major side effects are rare. Even a dose as high as 16 mg of lorazepam is often well tolerated without sedation. Therapeutic response may entail partial or complete remission within hours, though it may sometimes take several days. The therapeutic response seems to be strongest in acute catatonia where the patient presents with a rapid-onset catatonic state. This is especially the case in patients suffering from bipolar disorder and major depressive disorder. In contrast, patients with chronic catatonia, especially in the context of schizophrenia, show a weaker response to lorazepam and are more likely to receive ECT.
One important issue is the weaning of benzodiazepines. There is a need to balance the therapeutic benefits and the risks of withdrawal effects against dependence and the various risks of long-term benzodiazepine use. Withdrawal schedules for benzodiazepines exist, but these are generally designed for individuals who have been treated with benzodiazepines for months or years, whereas benzodiazepines in catatonia are often used for days or weeks. Nonetheless, such withdrawal schedules are associated with higher retention in treatment and better tolerability than abrupt discontinuation, and the latter risks potentially fatal withdrawal seizures. In one case series of seven patients who had a relapse of their catatonia on withdrawal of lorazepam (the speed of withdrawal ranging from abrupt discontinuation to dose reduction by 1 mg per week), all had resolution of catatonia once lorazepam was restored to its previous dose and four were able to successfully wean off more slowly over 6 weeks, although three received long-term lorazepam treatment to prevent relapse. There are other reports of long-term benzodiazepines being used to prevent re-emergence of catatonia. Therefore, some form of taper seems reasonable and, in the event that catatonia re-emerges following benzodiazepine withdrawal, it is sensible to ensure that an underlying condition has been appropriately treated as well as undertaking a slower taper.
Recommendations on the use of GABA-ergic medications in catatonia
Where benzodiazepines are used for catatonia, available routes of administration may include oral, sublingual, IM and IV. The choice of route should be decided based on clinical appropriateness, rapidity of the required response, patient preference, local experience and availability. (S)
Where benzodiazepines are used for catatonia, lorazepam is generally the preferred agent. (S)
Where lorazepam is used for catatonia, high doses above the licensed maximum may be necessary to achieve maximal effect. An adequate trial may be considered complete when catatonia is adequately treated, titration has been stopped due to side effects or the dose has reached at least 16 mg per day. (C)
Benzodiazepines for catatonia should not be stopped abruptly but rather tapered down. The speed of the taper depends on a balance of the therapeutic benefits and the risks of withdrawal effects against the possibility of dependence and the risks of long-term harm from benzodiazepines. (S)
If catatonia relapses on withdrawal of benzodiazepines, a clinician should ensure that any underlying condition has been adequately treated and a slower taper may be tried. (S)
Electroconvulsive therapy
The first patients treated with convulsive therapy, both for chemically induced seizures by Meduna in 1934 and for electrically induced seizures by Cerletti and Bini in 1938, had catatonic illnesses. Since then, governmental authorities, authors of textbooks on ECT or catatonia, and most publications discussing treatment options for catatonia have endorsed ECT, usually as the most effective treatment even where medications or other interventions have failed. For example, the United States FDA panel endorsed ECT for catatonia under a less restrictive Class 2 safety/efficacy designation, and in the UK NICE recommends ECT for catatonia. In the UK and many other countries, there are specific legal requirements for administering ECT in a patient who is unable to consent. Despite this extensive clinical recognition in common practice, a rigorous base of high-quality published evidence is lacking. This deficiency of RCTs arises principally from practical difficulties in conducting sham or placebo treatment arms in people who are usually severely ill with catatonia and often lack the ability to participate in informed-consent processes for such clinical trials.
Among several reviews of existing evidence on ECT for catatonia, the most recent comprehensive one was a meta-analysis by Leroy et al. Three RCTs involving ECT for patients with catatonia have been conducted, all of which were in patients with primary psychotic disorders. Comparisons were between ECT and risperidone; ECT, sham ECT and sodium thiopental; and bifrontal ECT and bitemporal ECT. Two of the trials were conducted specifically in patients with catatonia, while one had a catatonic subgroup. Unfortunately, none of these contained both standardised ratings for outcome and quantitative results that would allow for statistical determinations of effect size. The review did, however, identify 10 studies with such data on quantitative outcomes, but they lacked control groups. Bilateral forms of ECT were the typical treatment modality. From these 10 studies, a meta-analysis showed a standardised mean difference between pre–post severity scores of −3.14, which represents a highly effective treatment. Reported side effects were similar to those seen generally in the use of ECT for depression. Since Leroy et al.’s review, four additional studies of ECT with pre/post-quantitative outcomes have been published. All were naturalistic case series or retrospective analyses, using Clinical Global Impression (CGI) or BFCRS quantitative outcomes. Results ranged from decreases in scores of 40% to 82%, with response rates (final CGI ⩽2) of 83% to 90%. Pierson et al. studied adolescents ⩽18 years, reporting that 90% met the CGI criteria for response.
Most published reports describing ECT for catatonia have used bilateral forms of ECT, which are generally recommended for severe, medication-resistant or malignant forms of catatonia. No studies were found comparing bilateral versus unilateral ECT for catatonia. In terms of ECT sessions, most studies captured by Leroy et al.’s review that reported ECT frequency described ECT as taking place three times weekly, although this ranged between daily and twice weekly. The number of sessions ranged from 3 to 35, with a mean of 9 sessions. There is a lack of data on the superiority of these differing protocols.
Recommendations for the use of ECT in catatonia
Where ECT is administered, bilateral ECT should be considered. (S)
Where ECT is administered in acute catatonia, it should be given at least two times weekly. (S)
The number of ECT sessions should be decided on the basis of treatment response, risks and side effects. (S)
Other therapies
While the majority of patients with catatonia respond robustly to benzodiazepines or ECT, some patients have a partial response or non-response. Catatonia associated with schizophrenia may be less likely to respond to benzodiazepines. In addition, benzodiazepines and ECT are cautioned in some circumstances. There are also barriers to ECT use such as legal restrictions and stigma. These factors have prompted the trialling of several alternative agents, either as monotherapies or as augmentation strategies. The studies examining adjunctive medications for catatonia have consisted of prospective cohort studies, open prospective studies, prospective open-label studies, retrospective chart review studies, case series and an open-label double-blind trial.
NMDA receptor antagonists
The NMDA receptor may be allosterically more available to glutamate in catatonia, leading to dysfunction in cortico-striato-thalamo-cortical (CSTC) circuits. The NMDA receptor antagonists amantadine and memantine may reset the problems related to reduced dopamine and GABA in the CSTC circuitries by balancing NMDA receptor effects on PFC GABA-A parvalbumin interneurons that inhibit PFC pyramidal corticostriatal glutamatergic projections to the striatum, while also reducing NMDA action in the striatum itself. Medications such as amantadine and memantine serve as uncompetitive antagonists of the NMDA receptor and thus may be helpful in patients with catatonia. Amantadine has the added theoretical benefit of enhancing central dopamine release and delaying dopamine reuptake from the synapse and, since catatonia is hypothesised to be to some degree a disorder of hypodopaminergic tone, this profile may also benefit patients with catatonia. One systematic review reported on 11 articles that described the use of amantadine in 18 cases. Most patients had schizophrenia spectrum disorders, and some had medical comorbidities. Amantadine as monotherapy often abolished catatonia after a few doses. Five cases involved IV use and the others involved oral dosing. Oral doses ranged from 100 to 600 mg daily, with most patients receiving 200 mg daily. Daily IV doses ranged from 400 to 600 mg. A 2018 update added three more amantadine cases, with a mean oral dose of 306 (standard deviation (SD): 189) mg a day. In two of these cases, ECT was also used, and in another the results were equivocal. In a further review, seven cases of catatonia, six of whom were diagnosed with schizophrenia, were treated successfully with oral amantadine 200 mg a day.
Another patient with atypical psychosis with catatonia showed no improvement with amantadine, though upon removal of amantadine the condition worsened. In a clinical study of catatonia in neurologic and psychiatric patients in a tertiary neurological centre, 23 of 42 patients with catatonia related to a neurological disorder received adjunctive amantadine (mean dose 243 (SD: 57) mg/day), most often in addition to first-line oral lorazepam (mean dose 7.3 (SD: 2.8) mg/day) treatment. All patients achieved remission of their catatonia except for two patients who died of encephalitis or encephalomyelitis. In Beach et al.’s (2017) review, nine papers reported memantine treatment in nine cases. Again, schizophrenia-spectrum illnesses were predominantly represented in this sample. Memantine was commonly prescribed as an adjunctive treatment in combination with benzodiazepines. The same review added three unpublished memantine cases and reported that the mean daily dose used for all 12 cases was 12.5 (SD: 6.2) mg. A few additional articles cite the benefits for catatonia of other medications that may act as glutamate antagonists. These include four cases of minocycline use and one case of dextromethorphan–quinidine use. In summary, reviews show that in 58 published cases plus other additional reports of amantadine and memantine use in catatonia of various aetiologies, substantial improvement was reported. This improvement usually occurred within a 7-day window. A bias towards the non-reporting of negative results must be considered, making the lack of RCTs and controlled studies an important shortcoming.
Dopamine precursors, agonists and reuptake inhibitors
The dopamine system modulates motivation and movement by informing the anterior cingulate cortex/mid-cingulate cortex when a task is associated with high predictive value (tonic dopamine) as well as when circumstances abruptly change to better or worse than predicted (phasic dopamine). It has been proposed that the dopamine system in the midbrain ventral tegmental area/substantia nigra functions as a manager of sorts for the CSTC circuits thought to be implicated in catatonia. The dopamine agonists and precursors can be hypothesised to treat catatonia by increasing dopamine modulation and by favouring the striosomal direct pathway, as they do in akinetic mutism, leading to opening of the thalamic filter with feedforward activation of cortical regions including the supplementary motor area and primary motor cortex. Levodopa is a dopamine precursor that is often used in combination with a peripheral DOPA decarboxylase inhibitor (e.g. carbidopa and benserazide) in the treatment of Parkinson’s disease. A case report and a small case series found marked improvement after treatment with levodopa, although the case series reported worsening of psychosis. Bromocriptine, a dopamine D2 receptor agonist, was used successfully in a 16-year-old girl with catatonia. There is also a literature on the use of dopamine agonists in the related conditions of NMS (see section ‘Neuroleptic malignant syndrome’) and akinetic mutism, a neurological condition associated with lesions to frontal-subcortical circuits. Methylphenidate is a noradrenaline and dopamine reuptake inhibitor. There have been five case reports of successful use of methylphenidate for catatonia. Most of these cases were due to mood disorders, and most used methylphenidate as monotherapy.
Dopamine receptor antagonists and partial agonists
The use of antipsychotics is one of the most controversial areas in catatonia management.
Antipsychotic medications can induce catatonia (see section ‘Antipsychotic-induced catatonia’) and worsen it. Catatonia is also a risk factor for NMS, a severe antipsychotic-induced movement disorder. Moreover, in some studies of catatonia, the use of antipsychotics has been associated with poor outcomes. Nevertheless, dopamine receptor antagonists and partial agonists have been reported in some cases as beneficial in catatonia. This may particularly be the case in catatonic schizophrenia. There have been reports of the use of olanzapine, risperidone, ziprasidone, quetiapine and aripiprazole. Second-generation antipsychotics (SGAs) would theoretically be less likely to strongly antagonise dopamine receptors, making them potentially less dangerous adjunctive treatments than first-generation antipsychotics (FGAs) in terms of NMS risk. Aripiprazole’s partial agonism might balance the dopaminergic effects and be of some benefit for catatonia. A recent Cochrane review found only one RCT of antipsychotics for schizophrenia spectrum disorders with catatonic features, and considered the evidence to be of very low quality due to a small sample size, short duration, risk of bias and other methodological issues. This RCT compared risperidone to ECT, finding greater improvement in the ECT-treated group. Given that D2 receptor antagonists can worsen catatonia and trigger NMS in an at-risk group, some reviews have urged caution, especially in malignant catatonia. It has also been suggested that SGAs – or an FGA with weaker dopamine receptor affinity – should be preferred. Some sources suggest that antipsychotics should only be given in catatonia if co-administered with a benzodiazepine. One review concluded that there does not seem to be evidence to support the use of SGAs in patients with catatonia without an underlying psychosis. Two small studies have suggested that low serum iron in catatonia is associated with the development of NMS, leading some to suggest that serum iron may be used in catatonia to predict those who may develop NMS, but the evidence is not of high quality. Regarding clozapine, a systematic review found there is some evidence from case reports and small uncontrolled observational studies that clozapine may be effective in catatonic schizophrenia. In the largest identified study, 55 patients with catatonic schizophrenia received clozapine, resulting in 2 cases of complete remission, 48 cases of partial remission and 5 cases of no remission. Where catatonia occurs in the context of clozapine withdrawal, a systematic review of case reports found that re-initiation of clozapine or the use of ECT was usually effective, while benzodiazepines were less reliable.
Anticonvulsants
Leaving aside the cases where catatonia is a presentation of NCSE, catatonia has occasionally been treated with anticonvulsant medications. Evidence consists of case series and case reports. Three articles have reported using carbamazepine to treat catatonia in seven cases. Most cases were associated with a mood disorder, and carbamazepine was found to be effective without the need for benzodiazepines. Doses ranged from 100 to 1000 mg daily, with six cases receiving 600 mg daily or greater. Valproic acid use in catatonia has been reported in four papers in which five patients were suffering with psychoses, mostly schizophrenia spectrum in nature. In three instances, excited catatonia was noted as part of the presentation. These patients were treated successfully with valproic acid.
Doses ranged from 600 to 4000 mg daily. Another case series involving four cases highlighted the benefits of topiramate in the treatment of catatonia. Here too, most of these patients had been diagnosed with schizophrenia-like illnesses. Topiramate was used as an adjunctive treatment along with a benzodiazepine. All four cases improved on 200 mg daily. Phenytoin has been reported to be effective in cases where catatonia has appeared in the context of bacterial meningoencephalitis, NCSE and frontal lobe seizures. Levetiracetam and zonisamide have each been used in one case along with aripiprazole.
Anticholinergic agents
Two case reports described using benztropine IV as monotherapy to treat catatonia in two cases. In another case, trihexyphenidyl was used in combination with clozapine to treat catatonia. All patients had a schizophrenia-spectrum illness. In a fourth case, several medications including trihexyphenidyl were used to treat catatonia in a young woman with Wilson’s disease.
Miscellaneous treatments
Muscle relaxants, calcium channel blockers and corticosteroids have all anecdotally been associated with improvement in isolated patients with catatonia. Lithium and other treatments for prophylaxis in periodic catatonia warrant particular attention and are considered in section ‘Periodic catatonia’.
Repetitive transcranial magnetic stimulation and transcranial direct-current stimulation as alternatives to ECT
There are conditions and situations that discourage the use of ECT after non-response to benzodiazepines and second-line agents, as well as cases in which maintenance ECT is required; these offer a potential niche for newer neuromodulatory treatments such as repetitive transcranial magnetic stimulation (rTMS) and transcranial direct-current stimulation (tDCS) in the treatment of catatonia. Two systematic reviews have covered this topic and found that the majority of case reports and case series in the literature reported a positive response. rTMS over the bilateral dorsolateral PFC has been particularly emphasised. Adverse effects appear to be minimal.
Recommendations on the use of other therapies
Where first-line therapies for catatonia are unavailable, cautioned, ineffective or only partially effective, consider a trial of an NMDA receptor antagonist, either amantadine or memantine. (C)
Where first-line therapies and NMDA receptor antagonists are unavailable, cautioned, ineffective or only partially effective, consider a trial of levodopa, a dopamine agonist, carbamazepine, valproate, topiramate or a SGA. (D)
Antipsychotic medications should be avoided where there is no underlying psychotic disorder. (C)
Where catatonia exists in the context of an underlying psychotic disorder, if antipsychotic medications are used, they should be prescribed with caution after an evaluation of the potential benefits and risks, including the risk of NMS. Additional caution should be exercised if there is low serum iron or a prior history of NMS. If antipsychotic medications are used, a SGA should be used with gradual titration, and co-administration of a benzodiazepine should be considered. (S)
Where ECT is indicated but unavailable, consider treatment with rTMS or tDCS. (D)
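As a compact, purely illustrative summary of the escalation pathway discussed in this section, the sketch below holds the treatment tiers and their main caveats as a plain data structure. The tier labels and dictionary keys are invented for illustration; the sketch is not a protocol and omits the clinical judgement that the recommendations call for.

```python
# Illustrative summary of the treatment tiers discussed above, held as data.
# Tier labels and keys are invented; this is not a treatment protocol.

TREATMENT_TIERS = [
    {
        "tier": "first line",
        "options": ["benzodiazepine (usually lorazepam)", "ECT"],
        "caveats": ["see the special cases listed under 'First-line treatment'"],
    },
    {
        "tier": "if first line is unavailable, cautioned or ineffective",
        "options": ["NMDA receptor antagonist (amantadine or memantine)"],
        "caveats": ["evidence is largely from case reports and small series"],
    },
    {
        "tier": "if the above are unavailable, cautioned or ineffective",
        "options": ["levodopa", "dopamine agonist", "carbamazepine",
                    "valproate", "topiramate", "second-generation antipsychotic"],
        "caveats": [
            "avoid antipsychotics if there is no underlying psychotic disorder",
            "extra caution with low serum iron or a prior history of NMS",
            "consider co-administering a benzodiazepine with any antipsychotic",
        ],
    },
    {
        "tier": "where ECT is indicated but unavailable",
        "options": ["rTMS", "tDCS"],
        "caveats": ["evidence limited to case reports and small series"],
    },
]


def print_tiers() -> None:
    """Print each tier with its options on one line."""
    for entry in TREATMENT_TIERS:
        print(f"{entry['tier']}: {', '.join(entry['options'])}")


if __name__ == "__main__":
    print_tiers()
```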
Periodic catatonia
Periodic catatonia is a rare form of catatonia characterised by rapid-onset, brief, recurring episodes of hypokinetic or hyperkinetic catatonia. The typical episode may last 4–10 days, with an interepisodic period lasting weeks to years. Kraepelin first described it in the context of schizophrenia. Gjessing extensively studied this entity and published data mainly in German; his work has since been summarised by others. Leonhard considered periodic catatonia to be a form of unsystematic schizophrenia (i.e. genetically determined schizophrenia) compared to the systematic (nonperiodic and nonfamilial) form of schizophrenia. Based on this conceptualisation, subsequent research showed that periodic catatonia has an autosomal dominant pattern of transmission. Classically, periodic catatonia has been reported to occur in association with schizophrenia, but it has also been reported in patients with affective disorders and occasionally in patients with substance use disorders, underlying medical illnesses and in association with menstrual cycles. It has also been reported in adolescents and the geriatric population. Studies that have focused on the clinical profile of patients with periodic catatonia during the different episodes in the same patients suggest consistency of clinical features across the various episodes.
There is great uncertainty in the treatment of periodic catatonia, though several sources advise treatment of acute catatonic episodes along the lines of other cases of catatonia with benzodiazepines and ECT, and several reports support this. However, some cases do not respond to these treatments. There is also the important issue of maintenance treatment to prevent catatonic episodes. Lithium is the most frequently reported agent used in the maintenance of periodic catatonia, but even this evidence relies only on case reports and small case series. Case reports have reported success with mirtazapine and clomipramine in the maintenance treatment of periodic catatonia in patients with depressive disorder. In another case report, authors reported the effectiveness of fluoxetine (20 mg/day) and fluphenazine in the maintenance treatment of periodic catatonia in a patient with schizoaffective disorder.
Case reports or series have reported the role of lamotrigine and carbamazepine in the maintenance treatment of periodic catatonia. Despite the potential risk of NMS when antipsychotics are used in catatonia, some have reported a beneficial role of olanzapine, ziprasidone and risperidone in the long-term treatment of periodic catatonia.
Recommendation on periodic catatonia
In the maintenance phase of periodic catatonia, consider prophylactic treatment with lithium. (D)
Malignant catatonia
Catatonia may be conceptualised as a continuum, with milder forms at one end (termed simple or benign) and more severe forms, involving hyperthermia and autonomic dysfunction (termed malignant), at the other. Stauder described ‘lethal catatonia’ as a fulminant psychotic disorder characterised by intense motor excitement, which progressed to stuporous exhaustion, cardiovascular collapse, coma and death. The entire course, passing through excitement into stupor, involved mounting hyperthermia, autonomic instability, delirium, muscle rigidity and prominent catatonic features. The paucity of findings on autopsy was difficult to explain and in sharp contrast to the catastrophic clinical manifestations. This disorder was the subject of numerous publications throughout the pre-antipsychotic drug era. Competing terminology included Bell’s mania, acute delirious mania, pernicious catatonia and delirium acutum, among numerous others. More recently, the term malignant catatonia has been proposed, since not all cases are fatal. Unlike Stauder, some authors have observed that muscle tone in malignant catatonia is flaccid. Although the incidence of malignant catatonia appears to have declined following the introduction of modern psychopharmacologic agents, it continues to be reported. Like non-malignant catatonia, malignant catatonia represents a syndrome rather than a specific disease, occurring in association with diverse neuromedical illnesses as well as with psychiatric disorders. Current data suggest that a proportion of malignant catatonia cases previously attributed to schizophrenia were more likely the product of autoimmune disorders, particularly anti-NMDAR encephalitis. Mortality, which had exceeded 75% during the pre-antipsychotic drug era, has fallen to 10% in recent reports.
Although some qualified support may exist for the use of SGAs in non-malignant catatonia (see section ‘Dopamine receptor antagonists and partial agonists’), the literature on antipsychotics for malignant catatonia is rather different. First, there is an issue that malignant catatonia is generally clinically indistinguishable from NMS, so antipsychotics seem injudicious. Second, in a review of 292 malignant catatonia cases, 78% of those treated with only an antipsychotic died, compared with an overall mortality of 60%. Moreover, this review found that many patients with catatonia developed malignant features only after treatment with antipsychotics. The evidence for the SGAs in malignant catatonia is minimal and mixed. Antipsychotic drugs should be withheld whenever malignant catatonia is suspected. Since RCTs are unavailable, treatment recommendations for malignant catatonia are based on case reports or case series. Five international guidelines for the management of schizophrenia specifically address the treatment of malignant catatonia, although they are based on low levels of evidence. Each of the guidelines recommends ECT either as the initial treatment or as second line after a failed benzodiazepine trial.
Although the benefits of benzodiazepines in malignant catatonia are less consistent than in non-malignant catatonia, a review of 44 cases found that there was clear benefit in about a third, transient or partial improvement in a third and no benefit in the remainder, so a benzodiazepine trial seems reasonable. Doses as high as 24 mg of lorazepam per day may be required. However, if benzodiazepines are not rapidly effective, ECT should be started within 48–72 h following the onset of malignant catatonia. ECT appears to be a safe and effective treatment for malignant catatonia occurring in association with a psychiatric disorder. Among 68 patients reported in five series, 51 of 54 treated with ECT survived, whereas only 6 of 14 who received antipsychotics and supportive care recovered. Still, ECT appears effective only if initiated before severe progression of malignant catatonia. In another series, although 16 of 19 patients receiving ECT within 5 days of malignant catatonia onset survived, none of 14 patients starting ECT beyond that 5-day point recovered. In view of the life-threatening potential of malignant catatonia, bilateral treatments daily or twice daily for 3–5 days are often required to achieve a rapid result, followed by ECT at conventional frequencies until complete resolution. In addition, ECT has been effective as a symptomatic measure in malignant catatonia complicating a diversity of medical conditions, such as anti-NMDAR encephalitis, permitting resolution of the underlying condition. An older body of case series data had suggested that malignant catatonia could be successfully treated with adrenocorticotropic hormone and corticosteroids. However, the interpretation of these reports may be compromised by the simultaneous use of ECT in many cases. Other proposed treatments have included bromocriptine, amantadine, memantine and calcitonin. A single case report observed dramatic resolution of malignant catatonia with rTMS. Although one case report noted rapid improvement in malignant catatonia with tDCS, a second found no effect. As in non-malignant catatonia, rTMS and tDCS could prove promising in malignant catatonia where ECT is indicated but not possible. However, further investigation is necessary. Finally, a couple of case reports have reported benefit from propofol in malignant catatonia, which may possibly be useful if ECT is delayed.
Recommendations for the treatment of malignant catatonia
In malignant catatonia, discontinue all dopamine antagonists. (D)
In malignant catatonia, commence a trial of lorazepam at 8 mg/day (PO, IM or IV), titrating up according to response and tolerability up to a maximum of 24 mg/day. (C)
If there is partial or no response to lorazepam within 48–72 h in malignant catatonia, institute bilateral ECT once or twice daily for up to 5 days until malignant catatonia abates, followed by ECT three times per week until there is sustained improvement, usually 5–20 treatments in total. (D)
Neuroleptic malignant syndrome
NMS is a rare and potentially lethal idiosyncratic reaction to treatment with dopamine antagonists. Like malignant catatonia, NMS involves altered consciousness with catatonia, muscle rigidity, hyperthermia and autonomic dysfunction. Recent reports suggest a prevalence of 0.02–0.03%, much lower than the 1–3% reported in the 1980s. Mortality has declined over the years to an average of less than 10%.
Virtually all classes of drugs that induce dopamine receptor blockade have been implicated in causing NMS, with antipsychotics that have higher affinity for the D2 receptor posing the greatest risk. However, SGAs have also been associated with NMS, although they may result in an ‘atypical’ presentation with less severe or absent rigidity or hyperthermia. NMS may also occur with dopamine-blocking drugs used as antiemetics, with dopamine-depleting drugs, and during dopamine agonist withdrawal. About two-thirds of cases develop within the first 1–2 weeks after drug initiation. Laboratory abnormalities are nonspecific but commonly include elevated serum CK, leucocytosis and low serum iron, resembling malignant catatonia. Several authors have proposed that NMS represents an antipsychotic drug-induced toxic or iatrogenic subtype of malignant catatonia. Two retrospective studies of hospitalised patients meeting stringent criteria for NMS found that, in total, 42 out of 43 episodes also met DSM-IV criteria for catatonia. As mentioned in section ‘Dopamine receptor antagonists and partial agonists’, antipsychotic drugs can precipitate and worsen catatonia, while catatonia is a risk factor for NMS. Others, however, have asserted that malignant catatonia and NMS represent two distinct entities, suggesting that excited or agitated behaviour points to malignant catatonia. A prodromal phase involving agitation and affective disturbance is perhaps more common in malignant catatonia but is not universally present. However, agitation is a common feature of the psychosis preceding NMS for which antipsychotics were originally used. Prominent muscle rigidity has also been proposed as a distinguishing feature. Nonetheless, since patients with hyperactivity or psychotic features usually receive medications early in treatment, it may be difficult to know if the presence of rigidity represents NMS or drug-induced extrapyramidal side effects superimposed on malignant catatonia. Furthermore, many malignant catatonia cases in the era prior to antipsychotic therapy did present with rigidity. At a minimum, differentiating between NMS and malignant catatonia where antipsychotic medications have been used is acknowledged to be very challenging.
The most important factor in improving survival in NMS is discontinuation of dopamine-blocking medications. With cessation of dopamine-blocking drugs and supportive medical care, NMS is in most cases a self-limiting disorder with a mean recovery time of 7–10 days. Anticholinergic medications, which impair heat loss through reduction of sweating, should also be discontinued. Beyond these measures, there is limited consensus regarding the optimal therapeutic approach to NMS. It is difficult to compare specific treatments because NMS is rare, usually self-limiting, and heterogeneous in onset, progression and outcome, which renders RCTs challenging. Nevertheless, therapies that have been reported as successful in the treatment of NMS include benzodiazepines, dopamine agonists, dantrolene and ECT. The use of benzodiazepines for treating NMS is not surprising given the proposed overlap between NMS, catatonia and malignant catatonia. Several case reports and series have found that benzodiazepines have been associated with improvements in some individuals with NMS, though this response is sometimes transient.
However, they are not effective in all patients, and one prospective study of 14 episodes of NMS found that while seven out of nine patients with catatonic features responded to benzodiazepines, none of the five patients without catatonic features responded. Given that risks are small and benefits possibly marked, several sources suggest a trial of benzodiazepines. Some evidence suggests that NMS results from a reduction of dopaminergic activity in the brain, such that dopamine agonists may reduce that deficit and facilitate resolution of the syndrome. Systematic reviews of case reports have found that the dopaminergic medications bromocriptine and amantadine are associated with reduced mortality, and bromocriptine is associated with a reduced time to clinical response. Although levodopa has been used in only a limited number of reported NMS cases, it was thought to be effective in half the case reports and dramatic improvements were observed in some cases, even after failure to respond to dantrolene. Newer dopamine agonists developed for transdermal delivery (e.g. rotigotine) may facilitate administration of dopamine drugs under extreme circumstances. Temperature elevation in NMS is theorised to result from antipsychotic drug-induced impairment of central heat loss mechanisms in combination with excess heat production secondary to peripheral hypermetabolism and rigidity of skeletal muscle. Dantrolene, which inhibits contraction and heat production in muscle, may benefit those cases of NMS with extreme temperature elevations, severe rigidity and true hypermetabolism. In one systematic review, where dantrolene was used in 101 NMS patients and was the only medication used in 50%, improvement was reported in 81%. Furthermore, mortality was decreased by nearly half compared with supportive care alone. Another review reported a positive response to dantrolene in 105 (74.5%) of 141 NMS patients. Intravenous dantrolene should not be co-administered with calcium channel blockers (particularly verapamil and diltiazem; amlodipine and nifedipine may be safer alternatives), as hyperkalaemia and cardiovascular collapse can occur. The pharmacological agents discussed above are generally effective within the first several days of NMS. If, despite adequate dosing, a response has not been achieved by 2–3 days, a delayed response is unlikely and ECT should be considered. A review of 40 cases where ECT was used as a treatment primarily for NMS found that there was complete recovery in 25 cases (63%) and partial recovery in a further 11 (28%), although reporting bias is a significant concern. Response often occurs during the first few treatments, although some cases have required multiple ECT sessions in a single day. Furthermore, ECT has the advantages of treating some underlying conditions during acute NMS, when antipsychotics must be avoided, and of treating a prolonged, residual catatonic or parkinsonian state, which has been observed following NMS. One literature review found that the mortality of 48 NMS patients treated with ECT was 10%, compared with 21% for patients treated with supportive care alone. Another group retrospectively identified 15 NMS patients treated with ECT at their centre over a 17-year period and reported a mortality rate of 6.7%. ECT should therefore be considered as an initial therapy when NMS is severe and the risk of complications is high. Patients with NMS are not considered at risk for malignant hyperthermia during ECT.
However, succinylcholine can cause hyperkalaemia and arrhythmias in patients with severe rhabdomyolysis, which may explain instances of cardiac complications in NMS patients treated with ECT. Alternative muscle relaxants should be considered in patients at risk.
Treatment recommendations for NMS are not uniform and – in the absence of RCTs – any recommendations should be made with caution. A prospective study of 20 NMS patients observed that those receiving dantrolene (two patients), bromocriptine (two patients) or both (four patients) had a more prolonged course and more sequelae than those treated only with supportive care, leading the authors to question the efficacy of either agent. More recently, a systematic review of 405 NMS cases compared patients treated with dantrolene, bromocriptine and ECT with those receiving supportive care alone. Cases were defined as mild, moderate or severe using predefined severity criteria. Across the entire sample, independent of severity levels, differences in mortality rates with specific therapies compared to supportive care alone were not statistically significant. However, in severe NMS, mortality rates proved significantly lower with each of dantrolene, bromocriptine and ECT compared to supportive care. The authors concluded that supportive care alone could be sufficient for the treatment of mild to moderate NMS, but that specific therapies were indicated for severe NMS. A series of international guidelines for the management of schizophrenia contain certain specific recommendations for the treatment of NMS, but these are based on weak levels of evidence and do not consider all relevant treatment options. Some authors contend that expert-based treatment algorithms derived from clinical experience, numerous clinical reports and rational theories are of greater value than the recommendations provided by the guidelines. These algorithms stress that the specific treatment of NMS should be individualised and based on the character, duration and severity or stage of clinical features. In general, the first steps include supportive care and discontinuing dopamine-blocking agents and anticholinergics. Benzodiazepines are also widely recommended as an initial intervention for patients with mild NMS characterised by mild rigidity, catatonia or confusion, temperature < 38°C and HR < 100. Trials of bromocriptine, amantadine or other dopamine agonists may be a reasonable next step in patients with moderate NMS involving prominent parkinsonian signs and temperatures in the range of 38–40°C. Dantrolene appears beneficial primarily when extreme hyperthermia (>40°C) and severe rigidity develop. Although many patients respond to pharmacotherapy, none of the above medications have been reliably effective in all reported cases of NMS. As reviewed above, ECT may remain effective even late during treatment, as opposed to pharmacotherapies, and after pharmacotherapies have failed. Among patients who recover from NMS, there may be a 30% risk of recurrent episodes following antipsychotic rechallenge. However, most patients who require antipsychotics can be safely treated provided measures to reduce risk are followed. Suggested strategies are minimising other risk factors for NMS (such as agitation, medical illness and dehydration), allowing at least 2 weeks from recovery before rechallenge, using a low dose of a SGA with gradual titration and careful monitoring for early signs of NMS.
Recommendations for the treatment of NMS
In NMS, discontinue all dopamine antagonists. (C)
In NMS, discontinue anticholinergic drugs. (S)
In NMS, supportive care should be provided. This consists of assessment and appropriate management of airway, ventilation, temperature and swallow. Fluid input/output should be monitored, and aggressive fluid resuscitation should be used where required. There should be assessment for hyperkalaemia, renal failure and rhabdomyolysis. There should be careful monitoring for complications such as cardiorespiratory failure, aspiration pneumonia, thromboembolism and renal failure, alongside early consideration of high-dependency care. (S)
For mild, early NMS, characterised by mild rigidity, catatonia or confusion, temperature < 38°C and HR < 100, consider a trial of lorazepam. (C)
For moderate NMS, characterised by moderate rigidity, catatonia or confusion, temperature 38°C–40°C and HR 100–120, consider a trial of lorazepam. Consider a trial of bromocriptine or amantadine. Consider ECT. (C)
For severe NMS, characterised by severe rigidity, catatonia or coma, temperature > 40°C and HR > 120, consider a trial of lorazepam and consider dantrolene. Consider bromocriptine or amantadine. Consider ECT. (C)
If clinical features persist, consider bilateral ECT three times weekly or, in severe cases, once or twice daily, until NMS abates. Continue ECT three times per week until there is sustained improvement, to a total of 5–20 treatments. (C)
Delay restarting antipsychotics by at least 2 weeks after resolution of an NMS episode to reduce the risk of recurrence. (C)
Antipsychotic-induced catatonia
Antipsychotics with strong dopamine receptor affinity in particular can lead to the development of antipsychotic-induced catatonia. Antipsychotic-induced catatonia can occur in association with FGAs and probably less frequently with SGAs, and may develop within hours after the first administration of an antipsychotic agent. Diagnosis is often complicated by the question of whether the catatonia is intrinsic to the psychiatric illness or induced by its treatment, the so-called ‘catatonic dilemma’. The incidence of and risk factors for antipsychotic-induced catatonia are currently unclear. The catatonic signs of akinesia, stupor and mutism are more often associated with antipsychotics, whereas catalepsy and waxy flexibility are less common in antipsychotic-induced catatonia. More complex catatonic behavioural abnormalities, such as echolalia, echopraxia, verbigeration or Mitgehen, are not generally reported in association with antipsychotic treatment. The primary intervention for antipsychotic-induced catatonia is discontinuation of the antipsychotic agent. In some cases, this is sufficient on its own. Other possible options are reducing the dose or switching to an antipsychotic with lower affinity for the dopamine receptors. Benzodiazepines may also be helpful. In one prospective cohort study including 18 patients with antipsychotic-induced catatonia, all were administered lorazepam, of whom 14 had complete remission and 4 had some partial response. Of the partial responders, three were administered amantadine, which was associated with a prompt recovery. Anticholinergics were ineffective in six patients before they were administered benzodiazepines. Good response to lorazepam has also been reported in other case series. Amantadine has also been reported to be helpful in a case series. There is a lack of data on the prophylaxis of antipsychotic-induced catatonia.
Periodic catatonia is a rare form of catatonia characterised by rapid-onset, brief, recurring episodes of hypokinetic or hyperkinetic catatonia . The typical episode may last 4–10 days, with an interepisodic period lasting weeks to years. Kraepelin first described it in the context of schizophrenia. Gjessing extensively studied this entity and published data mainly in German; his work has been summarised by later authors. Leonhard considered periodic catatonia to be a form of unsystematic schizophrenia (i.e. genetically determined schizophrenia) compared to the systematic (nonperiodic and nonfamilial) form of schizophrenia . Based on this conceptualisation, subsequent research showed that periodic catatonia has an autosomal dominant pattern of transmission . Classically, periodic catatonia has been reported to occur in association with schizophrenia, but it has also been reported in patients with affective disorders and occasionally in patients with substance use disorders , underlying medical illnesses and in association with menstrual cycles . It has also been reported in adolescents and the geriatric population . Studies that have examined the clinical profile of patients with periodic catatonia across different episodes in the same patients suggest that clinical features are consistent from one episode to the next . There is great uncertainty in the treatment of periodic catatonia, though several sources advise treating acute catatonic episodes along the same lines as other cases of catatonia, with benzodiazepines and ECT , and several reports support this . However, some cases do not respond to these treatments. There is also the important issue of maintenance treatment to prevent catatonic episodes. Lithium is the most frequently reported agent used in the maintenance of periodic catatonia, but even this evidence relies only on case reports and small case series . Case reports have described success with mirtazapine and clomipramine in the maintenance treatment of periodic catatonia in patients with depressive disorder . In another case report, authors reported the effectiveness of fluoxetine (20 mg/day) and fluphenazine in the maintenance treatment of periodic catatonia in a patient with schizoaffective disorder . Case reports or series have also described the use of lamotrigine and carbamazepine in the maintenance treatment of periodic catatonia. Despite the potential risk of NMS when antipsychotics are used in catatonia, some have reported a beneficial role of olanzapine , ziprasidone and risperidone in the long-term treatment of periodic catatonia.
Recommendation on periodic catatonia
In the maintenance phase of periodic catatonia, consider prophylactic treatment with lithium. (D)
Malignant catatonia
Catatonia may be conceptualised as a continuum, with milder forms at one end (termed simple or benign) and more severe forms, involving hyperthermia and autonomic dysfunction (termed malignant), at the other .
Stauder described ‘lethal catatonia’ as a fulminant psychotic disorder characterised by intense motor excitement, which progressed to stuporous exhaustion, cardiovascular collapse, coma and death. The entire course, passing through excitement into stupor, involved mounting hyperthermia, autonomic instability, delirium, muscle rigidity and prominent catatonic features. The paucity of findings on autopsy was difficult to explain and in sharp contrast to the catastrophic clinical manifestations. This disorder was the subject of numerous publications throughout the pre-antipsychotic drug era. Competing terminology included Bell’s mania, acute delirious mania, pernicious catatonia and delirium acutum, among numerous others. More recently, the term malignant catatonia has been proposed, since not all cases are fatal . Unlike Stauder, some authors have observed that muscle tone in malignant catatonia is flaccid . Although the incidence of malignant catatonia appears to have declined following the introduction of modern psychopharmacologic agents, it continues to be reported. Like non-malignant catatonia, malignant catatonia represents a syndrome rather than a specific disease, occurring in association with diverse neuromedical illnesses as well as with psychiatric disorders. Current data suggest that a proportion of malignant catatonia cases previously attributed to schizophrenia were more likely the product of autoimmune disorders, particularly anti-NMDAR encephalitis . Mortality, which had exceeded 75% during the pre-antipsychotic drug era, has fallen to 10% in recent reports . Although some qualified support may exist for the use of SGAs in non-malignant catatonia (see section ‘Dopamine receptor antagonists and partial agonists’), the literature on antipsychotics for malignant catatonia is rather different. First, there is an issue that malignant catatonia is generally clinically indistinguishable from NMS, so antipsychotics seem injudicious . Second, in a review of 292 malignant catatonia cases , 78% of those treated with only an antipsychotic died, compared with an overall mortality of 60%. Moreover, this review found that many patients with catatonia developed malignant features only after treatment with antipsychotics . The evidence for the SGAs in malignant catatonia is minimal and mixed . Antipsychotic drugs should be withheld whenever malignant catatonia is suspected. Since RCTs are unavailable, treatment recommendations for malignant catatonia are based on case reports or case series. Five international guidelines for the management of schizophrenia specifically address the treatment of malignant catatonia , although they are based on low levels of evidence. Each of the guidelines recommends ECT either as the initial treatment or as second line after a failed benzodiazepine trial. Although the benefits of benzodiazepines in malignant catatonia are less consistent than in non-malignant catatonia, a review of 44 cases found that there was clear benefit in about a third, transient or partial improvement in a third and no benefit in the remainder , so a benzodiazepine trial seems reasonable. Doses as high as 24 mg of lorazepam per day may be required. However, if benzodiazepines are not rapidly effective, ECT should be started within 48–72 h following the onset of malignant catatonia . ECT appears to be a safe and effective treatment for malignant catatonia occurring in association with a psychiatric disorder.
Among 68 patients reported in five series , 51 of 54 treated with ECT survived, whereas only 6 of 14 who received antipsychotics and supportive care recovered. Still, ECT appears effective only if initiated before severe progression of malignant catatonia. In another series , although 16 of 19 patients receiving ECT within 5 days of malignant catatonia onset survived, none of 14 patients starting ECT beyond that 5-day point recovered. In view of the life-threatening potential of malignant catatonia, bilateral treatments daily or twice daily for 3–5 days are often required to achieve a rapid result, followed by ECT at conventional frequencies until complete resolution . In addition, ECT has been effective as a symptomatic measure in malignant catatonia complicating a diversity of medical conditions, such as anti-NMDAR encephalitis, permitting resolution of the underlying condition. An older body of case series data had suggested that malignant catatonia could be successfully treated with adrenocorticotropic hormone and corticosteroids . However, the interpretation of these reports may be compromised by the simultaneous use of ECT in many cases. Other proposed treatments have included bromocriptine, amantadine, memantine and calcitonin . A single case report observed dramatic resolution of malignant catatonia with rTMS . Although one case report noted rapid improvement in malignant catatonia with tDCS , a second found no effect . As in non-malignant catatonia, rTMS and tDCS could prove promising in malignant catatonia where ECT is indicated but not possible. However, further investigation is necessary. Finally, a few case reports have described benefit from propofol in malignant catatonia , which may possibly be useful if ECT is delayed .
Recommendations for the treatment of malignant catatonia
In malignant catatonia, discontinue all dopamine antagonists. (D)
In malignant catatonia, commence a trial of lorazepam at 8 mg/day (PO, IM or IV), titrating up according to response and tolerability up to a maximum of 24 mg/day. (C)
If there is partial or no response to lorazepam within 48–72 h in malignant catatonia, institute bilateral ECT once or twice daily for up to 5 days until malignant catatonia abates, followed by ECT three times per week until there is sustained improvement, usually 5–20 treatments in total. (D)
Neuroleptic malignant syndrome
NMS is a rare and potentially lethal idiosyncratic reaction to treatment with dopamine antagonists. Like malignant catatonia, NMS involves altered consciousness with catatonia, muscle rigidity, hyperthermia and autonomic dysfunction. Recent reports suggest a prevalence of 0.02–0.03%, much lower than the 1–3% reported in the 1980s . Mortality has declined over the years to an average of less than 10% . Virtually all classes of drugs that induce dopamine receptor blockade have been implicated in causing NMS, with antipsychotics that have higher affinity for the D2 receptor posing the greatest risk . However, SGAs have also been associated with NMS, although they may result in an ‘atypical’ presentation with less severe or absent rigidity or hyperthermia . NMS may also occur with dopamine-blocking drugs used as antiemetics, with dopamine-depleting drugs, and during dopamine agonist withdrawal. About two-thirds of cases develop within the first 1–2 weeks after drug initiation. Laboratory abnormalities are nonspecific but commonly include elevated serum CK, leucocytosis and low serum iron, resembling malignant catatonia.
Several authors have proposed that NMS represents an antipsychotic drug-induced toxic or iatrogenic subtype of malignant catatonia . Two retrospective studies of hospitalised patients meeting stringent criteria for NMS found that, in total, 42 out of 43 episodes also met DSM-IV criteria for catatonia . As mentioned in section ‘Dopamine receptor antagonists and partial agonists’, antipsychotic drugs can precipitate and worsen catatonia, while catatonia is a risk factor for NMS. Others, however, have asserted that malignant catatonia and NMS represent two distinct entities, suggesting that excited or agitated behaviour points to malignant catatonia . A prodromal phase involving agitation and affective disturbance is perhaps more common in malignant catatonia but is not universally present. However, agitation is a common feature of the psychosis preceding NMS for which antipsychotics were originally used. Prominent muscle rigidity has also been proposed as a distinguishing feature . Nonetheless, since patients with hyperactivity or psychotic features usually receive medications early in treatment, it may be difficult to know if the presence of rigidity represents NMS or drug-induced extrapyramidal side effects superimposed on malignant catatonia. Furthermore, many malignant catatonia cases in the era prior to antipsychotic therapy did present with rigidity. At a minimum, differentiating between NMS and malignant catatonia where antipsychotic medications have been used is acknowledged to be very challenging .
The most important factor in improving survival in NMS is discontinuation of dopamine-blocking medications . With cessation of dopamine-blocking drugs and supportive medical care, NMS is in most cases a self-limiting disorder with a mean recovery time of 7–10 days. Anticholinergic medications, which impair heat loss through reduction of sweating, should also be discontinued . Beyond these measures, there is limited consensus regarding the optimal therapeutic approach to NMS. It is difficult to compare specific treatments because NMS is rare, usually self-limiting, and heterogeneous in onset, progression and outcome, which renders RCTs challenging . Nevertheless, therapies that have been reported as successful in the treatment of NMS include benzodiazepines, dopamine agonists, dantrolene and ECT . The use of benzodiazepines for treating NMS is not surprising given the proposed overlap between NMS, catatonia and malignant catatonia. Several case reports and series have found that benzodiazepines have been associated with improvements in some individuals with NMS , though this response is sometimes transient . However, they are not effective in all patients and one prospective study of 14 episodes of NMS found that while seven out of nine patients with catatonic features responded to benzodiazepines, none of the five patients without catatonic features responded . Given that risks are small and benefits possibly marked, several sources suggest a trial of benzodiazepines . Some evidence suggests that NMS results from a reduction of dopaminergic activity in the brain, such that dopamine agonists may reduce that deficit and facilitate resolution of the syndrome . Systematic reviews of case reports have found that the dopaminergic medications, bromocriptine and amantadine, are associated with reduced mortality , and bromocriptine is associated with a reduced time to clinical response .
Although levodopa has been used in only a limited number of reported NMS cases, it was thought to be effective in half the case reports and dramatic improvements were observed in some cases, even after failure to respond to dantrolene . Newer dopamine agonists developed for transdermal delivery may facilitate administration of dopamine drugs under extreme circumstances (e.g. rotigotine) . Temperature elevation in NMS is theorised to result from antipsychotic drug-induced impairment of central heat loss mechanisms in combination with excess heat production secondary to peripheral hypermetabolism and rigidity of skeletal muscle. Dantrolene, which inhibits contraction and heat production in muscle, may benefit those cases of NMS with extreme temperature elevations, severe rigidity and true hypermetabolism . In one systematic review , where dantrolene was used in 101 NMS patients and was the only medication used in 50%, improvement was reported in 81%. Furthermore, mortality was decreased by nearly half compared with supportive care alone . Another report described a positive response to dantrolene in 105 (74.5%) of 141 NMS patients. Intravenous dantrolene should not be co-administered with calcium channel blockers (particularly verapamil and diltiazem; amlodipine and nifedipine may be safer alternatives), as hyperkalaemia and cardiovascular collapse can occur . The pharmacological agents discussed above are generally effective within the first several days of NMS . If, despite adequate dosing, a response has not been achieved by 2–3 days, a delayed response is unlikely and ECT should be considered. A review of 40 cases where ECT was used as a treatment primarily for NMS found that there was complete recovery in 25 cases (63%) and partial recovery in a further 11 (28%), although reporting bias is a significant concern . Response often occurs during the first few treatments, although some cases have required multiple ECTs in a single day . Furthermore, ECT has the advantages of treating some underlying conditions during acute NMS when antipsychotics must be avoided and of treating a prolonged, residual catatonic or parkinsonian state, which has been observed following NMS . One literature review found that the mortality of 48 NMS patients treated with ECT was 10% compared with 21% for patients treated with supportive care alone. Another group retrospectively identified 15 NMS patients treated with ECT at their centre over a 17-year period and reported a mortality rate of 6.7%. ECT should therefore be considered as an initial therapy when NMS is severe and the risk of complications is high. Patients with NMS are not considered at risk for malignant hyperthermia during ECT . However, succinylcholine can cause hyperkalaemia and arrhythmias in patients with severe rhabdomyolysis, which may explain instances of cardiac complications in NMS patients treated with ECT . Alternative muscle relaxants should be considered in patients at risk. Treatment recommendations for NMS are not uniform and – in the absence of RCTs – any recommendations should be made with caution. A prospective study of 20 NMS patients observed that those receiving dantrolene (two patients), bromocriptine (two patients) or both (four patients) had a more prolonged course and more sequelae than those treated only with supportive care, leading the authors to question the efficacy of either agent.
More recently, a systematic review of 405 NMS cases compared patients treated with dantrolene, bromocriptine and ECT with those receiving supportive care alone. Cases were defined as mild, moderate or severe using predefined severity criteria. Across the entire sample, independent of severity levels, differences in mortality rates with specific therapies compared to supportive care alone were not statistically significant. However, in severe NMS, mortality rates proved significantly lower with each of dantrolene, bromocriptine and ECT compared to supportive care. The authors concluded that supportive care alone could be sufficient for the treatment of mild to moderate NMS, but that specific therapies were indicated for severe NMS. A series of international guidelines for the management of schizophrenia contain certain specific recommendations for the treatment of NMS, but these are based on weak levels of evidence and do not consider all relevant treatment options . Some authors contend that expert-based treatment algorithms derived from clinical experience, numerous clinical reports and rational theories are of greater value than recommendations provided by the guidelines. These algorithms stress that the specific treatment of NMS be individualised and based on the character, duration and severity or stage of clinical features . In general, the first steps include supportive care and discontinuing dopamine-blocking agents and anticholinergics. Benzodiazepines are also widely recommended as an initial intervention for patients with mild NMS characterised by mild rigidity, catatonia or confusion, temperature < 38°C, HR < 100 . Trials of bromocriptine, amantadine or other dopamine agonists may be a reasonable next step in patients with moderate NMS involving prominent parkinsonian signs and temperatures in the range of 38–40°C. Dantrolene appears beneficial primarily when extreme hyperthermia (>40°C) and severe rigidity develop. Although many patients respond to pharmacotherapy, none of the above medications have been reliably effective in all reported cases of NMS. As reviewed above, ECT may remain effective even late during treatment, as opposed to pharmacotherapies, and after pharmacotherapies have failed . Among patients who recover from NMS, there may be a 30% risk of recurrent episodes following antipsychotic rechallenge . However, most patients who require antipsychotics can be safely treated provided measures to reduce risk are followed. Strategies suggested are minimising other risk factors for NMS (such as agitation, medical illness and dehydration), allowing at least 2 weeks from recovery before rechallenge, using a low dose of an SGA with gradual titration and careful monitoring for early signs of NMS .
Recommendations for the treatment of NMS
In NMS, discontinue all dopamine antagonists. (C)
In NMS, discontinue anticholinergic drugs. (S)
In NMS, supportive care should be provided. This consists of assessment and appropriate management of airway, ventilation, temperature and swallow. Fluid input/output should be monitored, and aggressive fluid resuscitation should be used where required. There should be assessment for hyperkalaemia, renal failure and rhabdomyolysis. There should be careful monitoring for complications such as cardiorespiratory failure, aspiration pneumonia, thromboembolism and renal failure, alongside early consideration of high-dependency care. (S)
For mild, early NMS, characterised by mild rigidity, catatonia or confusion, temperature < 38°C and HR < 100, consider a trial of lorazepam. (C)
For moderate NMS, characterised by moderate rigidity, catatonia or confusion, temperature 38–40°C and HR 100–120, consider a trial of lorazepam. Consider a trial of bromocriptine or amantadine. Consider ECT. (C)
For severe NMS, characterised by severe rigidity, catatonia or coma, temperature > 40°C and HR > 120, consider a trial of lorazepam and consider dantrolene. Consider bromocriptine or amantadine. Consider ECT. (C)
If clinical features persist, consider bilateral ECT three times weekly or, in severe cases, once or twice daily, until NMS abates. Continue ECT three times per week until there is sustained improvement to a total of 5–20 treatments. (C)
Delay restarting antipsychotics by at least 2 weeks after resolution of an NMS episode to reduce the risk of recurrence. (C)
Antipsychotic-induced catatonia
Antipsychotics with strong dopamine receptor affinity in particular can lead to the development of antipsychotic-induced catatonia . Antipsychotic-induced catatonia can occur in association with FGAs and probably less frequently with SGAs , and may develop within hours after the first administration of an antipsychotic agent. Diagnosis is often complicated by a question over whether the catatonia is intrinsic to the psychiatric illness or induced by its treatment, the so-called ‘catatonic dilemma’ . The incidence of and risk factors for antipsychotic-induced catatonia are currently unclear. The catatonic signs of akinesia, stupor and mutism are more often associated with antipsychotics whereas catalepsy and waxy flexibility are less common in antipsychotic-induced catatonia . More complex catatonic behavioural abnormalities, such as echolalia, echopraxia, verbigeration or Mitgehen , are not generally reported in association with antipsychotic treatment. The primary intervention for antipsychotic-induced catatonia is discontinuation of the antipsychotic agent. In some cases, this is sufficient on its own . Other possible options are reducing the dose or switching to an antipsychotic with lower affinity for the dopamine receptors. Benzodiazepines may also be helpful. In one prospective cohort study including 18 patients with antipsychotic-induced catatonia, all were administered lorazepam, of whom 14 had complete remission and 4 had some partial response . Of the partial responders, three were administered amantadine, which was associated with a prompt recovery. Anticholinergics were ineffective in six patients before they were administered benzodiazepines. Good response to lorazepam has also been reported in other case series . Amantadine has also been reported to be helpful in a case series . There is a lack of data on the prophylaxis of antipsychotic-induced catatonia.
Recommendations for antipsychotic-induced catatonia
When catatonia is attributed to antipsychotic administration, consider discontinuing the antipsychotic. (C)
In more severe cases or cases that do not resolve with antipsychotic discontinuation, consider a trial of a benzodiazepine. (C)
Once catatonia is treated, if an antipsychotic is still necessary, commence at a low dose and titrate gradually, closely monitoring for side effects. (S)
Children and adolescents
The reported prevalence of catatonia in modern child psychiatry ranges widely, from 0.6% to 17%; the lowest prevalence was reported in a French study of adolescents and the highest in a UK study of young people with autism .
An Indian study reported an inpatient paediatric prevalence of 5.5% . Most cases appear to occur in adolescence: the 119 paediatric cases in a large cohort study had a mean age of 14.6 (SD: 2.7) years, although age ranged from 5 to 17 years . The potential aetiologies of catatonia in youth span the same psychiatric and medical categories as adults, and while affective and psychotic processes are most commonly found, appropriate assessment of other potential aetiologies, as detailed in section ‘Clinical assessment’, is indicated based on clinical history, evaluation and examination. In recent years, paediatric catatonia has been increasingly recognised in anti-NMDA receptor encephalitis, being found in over a third of affected children in one study . The Pediatric Catatonia Rating Scale (PCRS) is modified from the BFCRS and has been shown to be applicable in young people . Given the paucity of evidence in children and adolescents, current treatment paradigms are based on those recommended for adults, combined with case reports, case series and international clinical experience. This seems reasonable, particularly where cases occur among adolescents. In terms of benzodiazepines, in a case series of 66 children and adolescents who were hospitalised for catatonia, 51 received benzodiazepines, which were associated with improvement in 33 (65%) . The mean dose of lorazepam was 5.4 (SD 3.6) mg/day. A smaller case series of six adolescents with catatonia found IV lorazepam was associated with improvement in all cases . With regard to ECT, a retrospective study of 39 adolescents who received ECT, of whom 17 had catatonia, found that 92% of those with catatonia responded . In literature reviews, the underlying evidence base is largely case reports and series. In one such review of 59 cases with a range of underlying disorders, at least 45 out of 59 (76%) improved after ECT . Another review identified 24 patients with catatonia who had outcome data after ECT, of whom 18 (75%) showed remission or marked improvement . The evidence suggests that side effects of ECT are similar to those in the adult population and serious complications are very rare . Since 2008, several reviews have proposed paediatric catatonia management along the lines of objective catatonia rating scales, medical work-up, removal of offending drugs and lorazepam challenge, followed by lorazepam treatment (sometimes at high doses) and/or ECT (Dhossche et al., 2010a).
Recommendations for catatonia in children and adolescents
Catatonia is known to occur in children as young as 5 years and clinicians should screen for catatonia whenever clinical suspicion exists. (S)
Evaluation of catatonia aetiologies in children and adolescents should include the same range of disorders as found in adults. (S)
When assessing for the presence of paediatric catatonia, the PCRS should be used. (C)
First-line management for paediatric catatonia includes a lorazepam challenge test, lorazepam in increasing doses and bilateral ECT. (D)
Older adults
The literature on catatonia in the older adult population is limited compared to that in the working-age adult population. As with younger adult patients, it can be transient or long-lasting, varying from weeks to months or years . The studies that have assessed the epidemiology of catatonia among older adults have focused on the acute psychiatric hospital setting, liaison psychiatry setting and intensive care setting . The prevalence has varied widely by the study setting and the assessment instruments used.
The phenotype of catatonia among older adults shows a high prevalence of hypokinetic signs, such as immobility/stupor, staring, rigidity, mutism, withdrawal, posturing and negativism , although one study listed excitement among the commonly identified clinical features . Catatonia among older adults is often multifactorial in aetiology and a wide range of medical conditions has been implicated, though the outcome is still usually good if it is treated promptly . Differential diagnosis can be challenging and misdiagnosis of catatonia as delirium, psychosis, stroke, dementia or coma has been reported . This may result in inappropriate ‘do not resuscitate’ orders . Reports suggest that medical complications of catatonia, such as deep vein thrombosis, pulmonary embolism and pneumonia, may be particular risks in older adults . One small study found that 4 out of 10 older adult patients with catatonia in a liaison psychiatry setting had medical complications and 2 died . Benzodiazepines remain the cornerstone of the treatment of catatonia among older adults, although older adults may respond to lower doses . ECT remains the treatment of choice among those not responding to benzodiazepines . Case reports suggest that methylphenidate and zolpidem may also be effective in managing catatonia in older adults. Case reports further suggest a possible beneficial effect of medications including amantadine, memantine, valproate, carbamazepine, topiramate, bromocriptine, propofol, biperiden, bupropion, olanzapine, lithium and tramadol. However, reports are mixed for amantadine, valproate and carbamazepine. Case reports have also described a beneficial effect of rTMS and tDCS in the management of catatonia .
Recommendations for catatonia in older adults
In older adults, care should be taken to identify medical disorders underlying catatonia. (S)
Catatonia should be considered in the differential diagnosis for apparent rapidly progressive dementia or ‘failure to thrive’ clinical presentations in older adults. (S)
First-line treatment of catatonia in older adults consists of benzodiazepines, often at lower doses than among younger adults, and ECT. (D)
The perinatal period
The only systematic study of catatonia in the perinatal period is a retrospective chart review of 200 women consecutively admitted to hospital with postpartum psychosis, which suggests that the condition may be prevalent in women with severe mental illness in the postnatal period: 40 women (20%) were assessed as having catatonic signs . The literature in other perinatal groups with psychiatric or medical illnesses does not allow prevalence estimates . In pregnancy, many potential complications of persistent catatonia place the mother and child at exceptionally high risk. These include venous thrombosis and thromboembolism, dehydration, malnutrition, incontinence, infections, communication difficulties, impaired co-operation with assessments and investigations, and impairment of capacity . Postnatally, the mother’s ability to breastfeed and to care for and bond with her infant are key concerns . The sections that follow describe the risks that the two main treatments for catatonia, lorazepam and ECT, may pose to the mother and the child.
More details about the general principles of use of psychotropic medications in the perinatal period may be found in the BAP guidance on this topic.
The reproductive safety of lorazepam in the perinatal period
Research on the reproductive safety of benzodiazepines remains at an early stage, and studies more typically evaluate benzodiazepines as a group, rather than individual agents. A meta-analysis of cohort studies of exposure to benzodiazepines found a trend towards increased risks for total (n = 5195) and cardiovascular (n = 4414) malformations, with the lower end of the 95% CI nearly achieving significance. A nationwide cohort study of 3.1 million pregnancies with a larger sample of benzodiazepine exposures (n = 40,846), using propensity scores to account for a large number of potential confounders and several sensitivity analyses, reported that first-trimester exposure to benzodiazepines was associated with a very small increased risk of overall congenital malformations (adjusted relative risk (aRR): 1.09; 95% CI: 1.05–1.13) and, specifically, heart defects (adjusted RR: 1.15; 95% CI: 1.10–1.21). A risk of oral clefts, reported by several previous studies , was not confirmed. There were differences between compounds and lorazepam was not associated with significant effects (aRR for overall congenital malformations 1.00, CI: 0.85–1.18; aRR for cardiovascular malformations 1.14, CI: 0.93–1.40). A systematic review and meta-analysis of prospective studies found that benzodiazepine exposure in pregnancy was associated with increased risks of spontaneous abortion, preterm birth, low birthweight and low Apgar scores with odds ratios of approximately two , a value generally regarded as the threshold for clinical significance . These outcomes are determined by other risk factors, many of them associated with mental disorders and difficult to capture from obstetric databases. Therefore, research findings in this area are known to be difficult to interpret and prone to overestimates. The authors highlight this risk of confounding as well as significant heterogeneity in the populations across the included studies. However, the risk of neonatal intensive care unit admission (2.61; CI: 1.64–4.14) was consistently increased and is likely to be related to neonatal benzodiazepine withdrawal. Cohort studies of neurodevelopmental outcomes following foetal benzodiazepine exposure have been inconclusive . A small number of studies suggest that a fully breastfed infant ingests very small amounts of the maternal lorazepam dose (Drugs and Lactation Database (LactMed), 2022). Clinical observations of infants are scarce but do not report infant sedation or other serious adverse effects following maternal doses within the licensed range (Drugs and Lactation Database (LactMed), 2022), but there is a lack of data regarding the effects of the high doses of lorazepam sometimes used in catatonia.
The use of ECT in the perinatal period
In systematic reviews of the case literature on ECT in the perinatal period , the most common adverse effects attributed to the treatment were foetal bradyarrhythmia, abdominal pain, uterine contractions, premature birth, vaginal bleeding, placental abruption and threatened abortion. In many cases, symptoms were mild and transient . No maternal deaths were reported.
Among 339 summarised cases, 11 foetal or neonatal deaths were reported, one of which was attributed to the treatment: it occurred in the context of maternal status epilepticus following three successive stimuli administered during ECT. One systematic review of case reports and series found a high rate of complications, including 12 foetal and neonatal deaths among 169 cases. However, the authors did not state whether these outcomes were caused or thought to be caused by ECT. This review included all adverse maternal and foetal outcomes among complications of ECT even if they were highly unlikely to be related to the treatment, such as anencephaly and other congenital anomalies. This approach led the authors to call for great caution when considering the use of ECT in pregnancy. The authors of the other four systematic reviews – while acknowledging the difficulties with interpreting case literature – concluded that ECT is an effective treatment for severe mental illness during pregnancy and that the risks to mother and foetus are relatively low. This view is shared by publications of the Royal College of Psychiatrists , the APA , and the Royal Australian and New Zealand College of Psychiatrists . To achieve the optimal outcomes for mother and child, it is important that professionals with expertise in ECT, perinatal psychiatry and obstetrics are involved in a decision to deliver ECT during pregnancy . It is essential that clinicians identify pre-existing risk factors for poor outcomes, appropriately monitor maternal and foetal well-being before, during and after the procedure, and utilise effective preventative interventions. The location and team composition for conducting the ECT and what measures should be taken before, during and after the procedure to prevent maternal and foetal complications depend on the stage of pregnancy . There is evidence from three observational studies that ECT is more effective for women with severe affective disorders after childbirth than for non-postnatal patients . The short half-lives of medications used for anaesthesia and muscle relaxation during ECT mean that women should not be prevented from resuming breastfeeding after treatments. Due to inherent methodological difficulties, considerable uncertainties exist in the evidence, and research findings should be interpreted with caution.
Recommendations for catatonia in the perinatal period
If catatonia is severe and the woman suffers from a mental illness, the psychiatric and obstetric team should make a joint decision as to which inpatient setting is most appropriate for treatment. Contact between the mother and baby should be encouraged as much as is possible and appropriate. Psychiatric care should be provided by a psychiatrist experienced in the management of perinatal mental illness. (S)
If catatonia is severe and presents high risks to the physical health of the mother and child, and treatment of the underlying condition has been ineffective or would lead to an unacceptable delay, specific treatment for catatonia should be considered. (S)
The risks of any specific treatment should be carefully weighed against the risks of other treatments or no treatment. (S)
Recommendations for catatonia during pregnancy
Screening and selection of patients for ECT should be conducted by a psychiatrist experienced in ECT, in consultation with both a psychiatrist with appropriate expertise in perinatal psychiatry and an obstetrician. (D)
If delivery is expected within a few weeks, alternative options, such as induction of labour or Caesarean section, should be considered by the obstetrician, anaesthetist, paediatrician and psychiatrist. (S)
If specific treatment for catatonia is required, lorazepam at doses up to 4 mg/day should be considered initially. (S)
If lorazepam is not effective at up to 4 mg/day, and the risks to the health of the mother and/or the child are high, the use of ECT can be considered. (S)
Recommendations for catatonia during breastfeeding
If treatment with lorazepam at doses higher than 4 mg/day is used, the mother should not breastfeed because of a lack of evidence of its safety. If possible and appropriate, lactation can be maintained during the period of high lorazepam dosing by expressing and discarding milk. (S)
Women can resume breastfeeding after ECT treatments. (C)
Autism spectrum disorder
International studies over the past two decades have documented a point prevalence of catatonia ranging from 12% to 20% in individuals with autism, with onset most commonly in adolescence and early adulthood . As the US Centers for Disease Control and Prevention estimated that 1 in 44 children have autism, it is likely that clinicians will care for individuals with autism and catatonia . It is theorised that shared neuronal circuitry and genetic susceptibility loci exist between autism and catatonia . Catatonia often encompasses the full range of psychomotor retarded and agitated clinical features in autism, and the latter may include dangerous repetitive self-injury with high risk for severe bodily harm . Diagnosis of catatonia in autism is complicated by the overlap in clinical features between the two conditions . Therefore, several authors have suggested that diagnosis of catatonia in autism should entail a marked change from baseline presentation . This is important because no pharmacological or neuromodulatory therapies are indicated for the core symptoms of autism . Treatment paradigms are based on case reports and series, as well as international clinical experience. The first published blueprints for treatment of catatonia in autism begin with standardised assessment of catatonia, taking into consideration baseline autistic features that may mimic catatonia . Their authors emphasised that amotivation, prompt dependence, withdrawal and slowness often accompany classic DSM catatonia signs in autism, and consideration of catatonia is urged for any change in activity level, self-care or skill. After a catatonia diagnosis and evaluation for underlying medical disorders, clinical features are to be classified as mild, moderate or severe, drawing a clear distinction between impairments such as slowness throughout the day versus immobility, stupor and food refusal. Mild catatonic features may be addressed by the Shah–Wing approach of psychological and supportive interventions with a focus on prompting, structure and stress reduction, and possible lorazepam usage. More severe presentations should be treated with the standard biological anticatatonic regimens including bilateral ECT . Fink, Taylor and Ghaziuddin offered a medical treatment model in 2006 comprising catatonia diagnosis with standardised rating scales including the BFCRS, a lorazepam trial and ongoing therapy, and bilateral ECT as needed .
In a case series of 22 individuals with catatonia and autism, Wachtel further discussed limited response to benzodiazepines as well as optimisation of ECT response, adequate hydration, pre-treatment hyperventilation and limited usage of anaesthetic agents that interfere with seizure threshold . A 2021 systematic review of 12 studies encompassing 969 individuals with autism and catatonia also noted a lack of clear response to benzodiazepines, which often had to be discontinued due to side effects. This stands in contrast to the overall benefit of benzodiazepines in catatonia in general and is consistent with other reports where ECT was implemented after failed benzodiazepine trials . The authors also noted that antipsychotics were often used in individuals with catatonia and autism despite a lack of known benefit of such agents in catatonia in general, and urged caution given the risk of worsening catatonia or precipitating its malignant form. Finally, for those patients with autism who require ECT, multiple reports suggest that maintenance ECT may be necessary indefinitely after an index course .
Recommendations for catatonia in autism spectrum disorder
Clinical vigilance is warranted for the assessment of catatonia in autism spectrum disorder given its high prevalence. (C)
Diagnosis of catatonia in autism spectrum disorder requires a marked change from baseline presentation. (S)
First-line interventions in mild cases of catatonia are psychological interventions and/or lorazepam, but the standard treatments for catatonia (i.e. benzodiazepines in escalating dosages and/or bilateral ECT) should be considered in moderate to severe cases. (D)
Medical conditions
Considerations in kidney disease
Catatonia, including malignant catatonia , has been described in the context of severe renal impairment , in patients receiving dialysis and in the post-transplantation period, often as a result of drug toxicities . Patients with renal impairment, even those on dialysis, may still be able to tolerate and benefit from benzodiazepines, with consideration of the severity of renal impairment, the route of administration, comorbidities (e.g. frailty) and the risk of delirium. Typically, no dose adjustments are required even in severe impairment for acute dosing of lorazepam in either oral or parenteral formulation; however, for high or repeated parenteral dosing , monitoring for propylene glycol toxicity and consideration of other therapies such as ECT and NMDA receptor antagonists may be indicated to lessen the impact of the potential side effects of treatment (e.g. falls, confusion, delirium).
Considerations in liver disease
Malignant catatonia may be a rare cause of liver failure . Catatonia has been reported secondary to Wilson’s disease , after liver transplantation , including in post-transplantation delirium as well as secondary to post-transplantation drug toxicities . The early post-liver transplantation period may be a state of deficiency in GABA signalling , which may place the patient at increased risk for catatonia. Benzodiazepines may be an effective treatment for catatonia post-transplantation . In mild to moderate hepatic impairment, typically no dose adjustment for lorazepam is required (oral or parenteral formulations). In severe impairment or failure, use caution . Other treatments such as NMDA receptor antagonists or ECT may be required when caution around benzodiazepine treatment is warranted.
Considerations in lung disease
Pulmonary complications of catatonia may include pulmonary embolism, aspiration pneumonia, pneumothorax, bronchorrhoea, central hypoventilation, respiratory failure and delayed weaning from mechanical ventilation . Catatonia has been described in the context of respiratory illnesses, including influenza and SARS-CoV-2 , as well as in critical illnesses (e.g. sepsis, shock). Catatonia in the context of critical illness including respiratory failure may have high comorbidity with delirium . Respiratory failure due to malignant catatonia has been described and may be especially responsive to ECT , particularly in those unable to tolerate a benzodiazepine .
Recommendations for catatonia in kidney, liver and lung disease
In renal impairment, lorazepam dosing does not usually need to be altered, but consider additional monitoring for side effects. (C)
In mild or moderate hepatic impairment, lorazepam dosing does not usually need to be altered, but caution should be exercised when considering lorazepam in severe hepatic impairment. (B)
In severe respiratory disease, consider giving ECT as a first-line treatment rather than benzodiazepines. (D)
The evidence suggests that side effects of ECT are similar to the adult population and serious complications are very rare . Since 2008, several reviews have proposed paediatric catatonia management along the lines of objective catatonia rating scales, medical work-up, removal of offending drugs and lorazepam challenge, followed by lorazepam treatment (sometimes at high doses) and or ECT (Dhossche et al., 2010a; ; ; ; ; ). Recommendations for catatonia in the children and adolescents Catatonia is known to occur in children as young as 5 years and clinicians should screen for catatonia whenever clinical suspicion exists. (S) Evaluation of catatonia aetiologies in children and adolescents should include the same range of disorders as found in adults. (S) When assessing for the presence of paediatric catatonia, the PCRS should be used. (C) First-line management for paediatric catatonia includes a lorazepam challenge test, lorazepam in increasing doses and bilateral ECT. (D) The literature on catatonia in the older adult population is limited compared to that in the working-age adult population. As with adult patients, it can be transient or long lasting, varying from weeks to months or years . The studies that have assessed the epidemiology of catatonia among older adults have focused on the acute psychiatric hospital setting, liaison psychiatry setting and intensive care setting . The prevalence has varied widely by the study setting and the assessment instruments used. The phenotype of catatonia among older adults shows a high prevalence of hypokinetic signs, such as immobility/stupor, staring, rigidity, mutism, withdrawal, posturing and negativism , although one study listed excitement among the commonly identified clinical features . Catatonia among older adults is often multifactorial in aetiology and a wide range of medical conditions has been implicated, though the outcome is still usually good if it is treated promptly . Differential diagnosis can be challenging and misdiagnosis of catatonia as delirium, psychosis, stroke, dementia or coma have been reported . This may result in inappropriate ‘do not resuscitate’ orders . Reports suggest that medical complications of catatonia, such as deep vein thrombosis, pulmonary embolism and pneumonia may be particular risks in older adults . One small study found that 4 out of 10 older adult patients with catatonia in a liaison psychiatry setting had medical complications and 2 died . Benzodiazepines remain the cornerstone of the treatment of catatonia among older adults, although they may respond to lower doses . ECT remains the treatment of choice among those not responding to benzodiazepines . Case reports suggest that methylphenidate and zolpidem may also be effective in managing catatonia in older adults. Case reports further suggest a possible beneficial effect of medications including amantadine, memantine, valproate, carbamazepine, topiramate, bromocriptine, propofol, biperiden, bupropion, olanzapine, lithium and tramadol. However, reports are mixed for amantadine, valproate or carbamazepine. Case reports have also reported the beneficial effect of rTMS and tDCS in the management of catatonia . Recommendations for catatonia in older adults In older adults, care should be taken to identify medical disorders underlying catatonia. (S) Catatonia should be considered in the differential diagnosis for an apparent rapidly progressive dementia or ‘failure to thrive’ clinical presentations in older adults. 
(S) First-line treatment of catatonia in the older adults consists of benzodiazepines, often at lower doses than among younger adults, and ECT. (D) The only systematic study of catatonia in the perinatal period is a retrospective chart review of 200 women consecutively admitted to hospital with postpartum psychosis, which suggests that the condition may be prevalent in women with severe mental illness in the postnatal period: 40 women (20%) were assessed as having catatonic signs . The literature in other perinatal groups with psychiatric or medical illnesses does not allow prevalence estimates . In pregnancy, many potential complications of persistent catatonia place the mother and child at exceptionally high risk. These include venous thrombosis and thromboembolism, dehydration, malnutrition, incontinence, infections, communication difficulties, impaired co-operation with assessments and investigations, and impairment of capacity . Postnatally, the mother’s ability to breastfeed and to care and bond with her infant are key concerns . The sections that follow describe the risks that the two main treatments for catatonia, lorazepam and ECT, may pose to the mother and the child. More details about the general principles of use of psychotropic medications in the perinatal period may be found in the BAP guidance on this topic The reproductive safety of lorazepam in the perinatal period Research on the reproductive safety of benzodiazepines remains at an early stage, and studies more typically evaluate benzodiazepines as a group, rather than individual agents. In a meta-analysis of cohort studies of exposure to benzodiazepines, found a trend towards increased risks for total ( n = 5195) and cardiovascular ( n = 4414) malformations with the lower end of the 95% CI nearly achieving significance. reported in a nationwide cohort study of 3.1 million pregnancies with a larger sample of benzodiazepine exposures ( n = 40,846), using propensity scores to account for a large number of potential confounders and several sensitivity analyses, that first trimester exposure to benzodiazepines was associated with a very small increased risk of overall congenital malformations (adjusted relative risk (aRR): 1.09; 95% CI: 1.05–1.13) and specifically, heart defects (adjusted RR: 1.15; 95% CI: 1.10–1.21). A risk of oral clefts, reported by several previous studies , was not confirmed. There were differences between compounds and lorazepam was not associated with significant effects (aRR for overall congenital malformations 1.00, CI: 0.85–1.18; aRR for cardiovascular malformations 1.14, CI: 0.93–1.40). A systematic review and meta-analysis of prospective studies found that benzodiazepine exposure in pregnancy was associated with increased risks of spontaneous abortion, preterm birth, low birthweight and low Apgar scores with odds ratios of approximately two , a value generally regarded as the threshold for clinical significance . These outcomes are determined by other risk factors, many of them associated with mental disorders and difficult to capture from obstetric databases. Therefore, research findings in this area are known to be difficult to interpret and prone to overestimates. The authors highlight this risk of confounding as well as significant heterogeneity in the populations across the included studies. However, the risk of neonatal intensive care unit admission (2.61; CI: 1.64–4.14) was consistently increased and is likely to be related to neonatal benzodiazepine withdrawal. 
Cohort studies of neurodevelopmental outcomes following foetal benzodiazepine exposure have been inconclusive . A small number of studies suggest that a fully breastfed infant ingests very small amounts of the maternal lorazepam dose (Drugs and Lactation Database (LactMed), 2022; ). Clinical observations of infants are scarce but do not report infant sedation or other serious adverse effects following maternal doses within the licensed range (Drugs and Lactation Database (LactMed), 2022; ), but there is a lack of data regarding the effects of the high doses of lorazepam sometimes used in catatonia.
In systematic reviews of the case literature on ECT in the perinatal period , summarised by , the most common adverse effects attributed to the treatment were foetal bradyarrhythmia, abdominal pain, uterine contractions, premature birth, vaginal bleeding, placental abruption and threatened abortion. In many cases, symptoms were mild and transient . No maternal deaths were reported. Among 339 cases summarised by , 11 foetal or neonatal deaths were reported, one of which was attributed to the treatment: it occurred in the context of maternal status epilepticus following three successive stimuli administered during ECT. found a high rate of complications in their systematic review of case reports and series, including 12 foetal and neonatal deaths among 169 cases. However, the authors did not state whether these outcomes were caused or thought to be caused by ECT. This review included all adverse maternal and foetal outcomes among complications of ECT even if they were highly unlikely to be related to the treatment, such as, for example, anencephaly and other congenital anomalies. This approach led the authors to call for great caution when considering the use of ECT in pregnancy. The authors of the other four systematic reviews – while acknowledging the difficulties with interpreting case literature – concluded that ECT is an effective treatment for severe mental illness during pregnancy and that the risks to mother and foetus are relatively low. This view is shared by publications of the Royal College of Psychiatrists , the APA , and the Royal Australian and New Zealand College of Psychiatrists . To achieve the optimal outcomes for mother and child, it is important that professionals with expertise in ECT, perinatal psychiatry and obstetrics are involved in a decision to deliver ECT during pregnancy . It is essential that clinicians identify pre-existing risk factors for poor outcomes, appropriately monitor maternal and foetal well-being before, during and after the procedure, and utilise effective preventative interventions. The location and team composition for conducting the ECT and what measures should be taken before, during and after the procedure to prevent maternal and foetal complications depend on the stage of pregnancy . There is evidence from three observational studies that ECT is more effective for women with severe affective disorders after childbirth than for non-postnatal patients . The short half-lives of medication used for anaesthesia and muscle relaxation during ECT mean that women should not be prevented from resuming breastfeeding after treatments. Due to inherent methodological difficulties, considerable uncertainties exist in the evidence, and research findings should be interpreted with caution. Recommendations for catatonia in the perinatal period If catatonia is severe and the woman suffers from a mental illness, the psychiatric and obstetric team should make a joint decision as to which inpatient setting is most appropriate for treatment. Contact between the mother and baby should be encouraged as much as is possible and appropriate. Psychiatric care should be provided by a psychiatrist experienced in the management of perinatal mental illness. (S) If catatonia is severe and presents high risks to the physical health of the mother and child, and treatment of the underlying condition has been ineffective or would lead to an unacceptable delay, specific treatment for catatonia should be considered. 
(S) The risks of any specific treatment should be carefully weighed against the risks of other treatments or no treatments. (S) Recommendations for catatonia during pregnancy Screening and selection of patients for ECT should be conducted by a psychiatrist experienced in ECT, in consultation with both a psychiatrist with appropriate expertise in perinatal psychiatry and an obstetrician. (D) If delivery is expected within a few weeks, alternative options, such as induction of labour or Caesarean section should be considered by the obstetrician, anaesthetist, paediatrician and psychiatrist. (S) If specific treatment for catatonia is required, lorazepam at doses up to 4 mg/day should be considered initially. (S) If lorazepam is not effective at up to 4 mg/day, and the risks to the health of the mother and/or the child are high, the use of ECT can be considered (S) Recommendations for catatonia during breastfeeding If treatment with lorazepam at doses higher than 4 mg/day is used, the mother should not breastfeed because of a lack of evidence of its safety. If possible and appropriate, lactation can be maintained during the period of high lorazepam dosing by expressing and discarding milk. (S) Women can resume breastfeeding after ECT treatments. (C) International studies over the past two decades have documented a point prevalence of catatonia ranging from 12% to 20% in individuals with autism, with onset most commonly in adolescence and early adulthood . As the US Center for Disease Control estimated an incidence of autism as 1 in 44 children, it is likely that clinicians will care for individuals with autism and catatonia . It is theorised that shared neuronal circuitry and genetic susceptibility loci exist between autism and catatonia . Catatonia often encompasses the full range of psychomotor retarded and agitated clinical features in autism, and the latter may include dangerous repetitive self-injury with high risk for severe bodily harm . Diagnosis of catatonia in autism is complicated by the overlap in clinical features between the two conditions . Therefore, several authors have suggested that diagnosis of catatonia in autism should entail a marked change from baseline presentation . This is important because no pharmacological or neuromodulatory therapies are indicated for the core symptoms of autism . Treatment paradigms are based on case reports and series, as well as international clinical experience. The first blueprints for treatment of catatonia in autism were published by and begin with standardised assessment of catatonia, taking into consideration baseline autistic features that may mimic catatonia . emphasised that amotivation, prompt dependence, withdrawal and slowness often accompany classic DSM catatonia signs in autism, and consideration of catatonia is urged for any change in activity level, self-care or skill. After a catatonia diagnosis and evaluation for underlying medical disorders, clinical features are to be classified as mild, moderate or severe, drawing a clear distinction between impairments such as slowness throughout the day versus immobility, stupor and food refusal. Mild catatonic features may be addressed by the Shah–Wing approach of psychological and supportive interventions with a focus on prompting, structure and stress reduction, and possible lorazepam usage. More severe presentations should be treated with the standard biological anticatatonic regimens including bilateral ECT . 
Fink, Taylor and Ghaziuddin offered a medical treatment model in 2006 including catatonia diagnosis with standardised rating scales including the BFCRS, lorazepam trial and ongoing therapy, and bilateral ECT as needed . In a case series of 22 individuals with catatonia and autism, Wachtel further discussed limited response to benzodiazepines as well optimisation of ECT response, adequate hydration, pre-treatment hyperventilation and limited usage of anaesthetic agents that interfere with seizure threshold . A 2021 systematic review of 12 studies encompassing 969 individuals with autism and catatonia, also noted a lack of clear response to benzodiazepines, which often had to be discontinued due to side effects. This stands in contrast to the overall benefit of benzodiazepines in catatonia in general and is consistent with other reports where ECT was implemented after failed benzodiazepine trials . The authors also noted that antipsychotics were often used in individuals with catatonia and autism despite a lack of known benefit of such agents in catatonia in general, and urged caution given the risk of worsening catatonia or precipitating its malignant form. Finally, for those patients with autism who require ECT, multiple reports suggest that maintenance ECT may be necessary indefinitely after an index course . Recommendations for catatonia in autism spectrum disorder Clinical vigilance is warranted for the assessment of catatonia in autism spectrum disorder given its high prevalence. (C) Diagnosis of catatonia in autism spectrum disorder requires a marked change from baseline presentation. (S) First-line interventions in mild cases of catatonia are psychological interventions and/or lorazepam, but the standard treatments for catatonia (i.e. benzodiazepines in escalating dosages and/or bilateral ECT) should be considered in moderate to severe cases. (D) Considerations in kidney disease Catatonia, including malignant catatonia , has been described in the context of severe renal impairment , in patients receiving dialysis and in the post-transplantation period, often as a result of drug toxicities . Patients with renal impairment, even those on dialysis, may still be able to tolerate and benefit from benzodiazepines with consideration of the severity of renal impairment, route of administration of benzodiazepines, comorbidities (e.g. frailty), including the risk for delirium. Typically, no dose adjustments are required even in severe impairment for acute dosing of lorazepam in either oral or parenteral formulation; however, for high or repeated parenteral dosing , monitoring for propylene glycol toxicity and consideration of other therapies such as ECT and NMDA receptor antagonists may be indicated to lessen the impact of the potential side effects of treatment (e.g. falls, confusion, delirium). Considerations in liver disease Malignant catatonia may be a rare cause of liver failure . Catatonia has been reported secondary to Wilson’s disease , after liver transplantation , including in post-transplantation delirium as well as secondary to post-transplantation drug toxicities . The early post-liver transplantation period may be a state of deficiency in GABA signalling , which may place the patient at increased risk for catatonia. Benzodiazepines may be an effective treatment for catatonia post-transplantation . In mild to moderate hepatic impairment, typically no dose adjustment for lorazepam is required (oral or parenteral formulations). In severe impairment or failure, use caution . 
Other treatments such as NMDA receptor antagonists or ECT may be required when benzodiazepine treatment is cautioned. Considerations in lung disease Pulmonary complications of catatonia may include pulmonary embolism, aspiration pneumonia, pneumothorax, bronchorrhoea, central hypoventilation, respiratory failure and delayed weaning from mechanical ventilation ( ; ; ; ter ; ). Catatonia has been described in the context of respiratory illnesses, including influenza and SARS-CoV-2 , as well as in critical illnesses (e.g. sepsis, shock). Catatonia in the context of critical illness including respiratory failure may have high comorbidity with delirium . Respiratory failure due to malignant catatonia has been described and may be especially responsive to ECT ( ; ; ; ter ; ), especially in those unable to tolerate a benzodiazepine . Recommendations for catatonia in kidney, liver and lung disease In renal impairment, lorazepam dosing does not usually need to be altered, but consider additional monitoring for side effects. (C) In mild or moderate hepatic impairment, lorazepam dosing does not usually need to be altered, but caution should be exercised when considering lorazepam in severe hepatic impairment. (B) In severe respiratory disease, consider giving ECT as a first-line treatment rather than benzodiazepines. (D)
One general point for future research is that there is a need to harmonise definitions of catatonia and definitions of specific catatonic signs, as well as thresholds for making a diagnosis . As this guideline has highlighted, the most urgent research goal for catatonia is to develop a more robust evidence base for its treatment. Despite the wealth of small reports and observational data, a Cochrane systematic review of the use of benzodiazepines for catatonia found that no RCT met its inclusion criteria . Although some might consider an RCT infeasible in catatonia, the Cochrane review found several examples. Unfortunately, though, these studies had methodological issues, introducing questions of validity. Conducting clinical trials in catatonia is an important priority for psychiatric research in the next decade. In the meantime, there is substantial scope to improve the quality of evidence for the treatment of catatonia by using large databases of electronic healthcare records with prescribing data. Additional measures to improve the evidence base would include harmonising the outcomes used in research studies by developing a set of core outcomes. This would facilitate pooling of data across research centres, which is an important tool in researching less common conditions. We provide a list of priority research questions in . Supplemental material for this article (sj-docx-1-jop-10.1177_02698811231158232, sj-docx-2-jop-10.1177_02698811231158232 and sj-pptx-3-jop-10.1177_02698811231158232) is available online.
|
Exploratory analysis of the suitability of data from the civil registration system for estimating excess mortality due to COVID-19 in Faridabad district of India
|
c1d6ad67-feb5-4691-b8e0-9fffc68f9f29
|
10101362
|
Forensic Medicine[mh]
|
The study was based on the secondary data analysis. The study protocol was approved by the Institutional Ethics Committee, All India Institute of Medical Sciences, New Delhi, India. Initial exploratory analysis was performed using available data from a concurrent study. Later, Municipal Commissioner, Faridabad, was approached formally who provided CRS (civil registration system) data for the estimation of excess deaths. The study was carried out between March to December 2021. Study setting : Faridabad district of Haryana State lies in the National Capital Region with Delhi to its north and Uttar Pradesh to its east. The district has one functioning government medical college hospital and several tertiary care multi-speciality hospitals in the private sector . The 2011 census recorded the population of the district as 1,809,733, with 79.5 per cent residing in urban areas with an estimated population in 2020 of 2.1 million , . In 2019, 11,141 deaths were registered in Faridabad district and the State of Haryana reported 100 per cent coverage of death registration . The district reported over 45,000 cases and 400 COVID-19-related deaths in 2020 with a peak of active and confirmed cases being recorded on November 20, 2020 . A serosurvey in October 2020 found that almost one-third of the population (31.2%) had been exposed to the virus . Data sources : A line list of all deaths between January 1, 2016 and September 30, 2021 registered until November 30, 2021 was obtained from the Registrar of Births and Deaths Offices in Ballabgarh and Badkhal tehsils of Faridabad district. To check for the level of completeness in the registration of deaths in 2020, a sample of 50 deaths each recorded in the registers of two large crematoria and one graveyard (total 150) in the local area were drawn through systematic random sampling and compared with death records obtained from CRS. Data analysis : Data were analyzed using Microsoft Excel 2016 and STATA ® release 15 (StataCorp LLC, College Station, Texas, USA). Three approaches were explored to estimate excess mortality by gender and age groups (0-14, 15-59 and ≥60 yr) in 2020 and 2021 against a baseline estimated from monthly deaths in the corresponding group during 2016-2019. The first approach compared monthly mortality in 2020 and 2021 (pandemic years) against the historical average, using standard error (SE) for the confidence intervals (CIs). In the second approach, expected monthly mortality for 2020 and 2021 was estimated using the FORECAST.ETS function with seasonality set to 1 (automatic calculation) in Microsoft Excel 2016. The function utilizes a triple exponential smoothing method based on historical data and using the square root of deaths as the CI . The total (cumulative) number of excess (observed vs . baseline) all-cause deaths in both approaches was obtained by summing the excess all-cause deaths (with negative values set to 0) in each month. Weekly mortality was plotted by age and sex separately for the years 2020 and 2021 and compared with the range identified by the above two approaches. The third approach was a modification of the simple linear regression method elaborated by Gibertoni et al , with the observed deaths as the dependent variable and an ordinal variable indicating year as the independent variable (coded 1-6 to represent years from 2016 to 2020-21). 
The constant (a) and slope (b) obtained for each month were used to estimate, by extrapolation, the expected number of monthly deaths in 2020 and 2021 using the equation d (expected deaths) = a + (b × x), where x = 5 and x = 6 correspond to the years 2020 and 2021, respectively. Monthly excess mortality (observed minus expected), with negative values set to 0, was summed to arrive at cumulative all-cause excess mortality. Excess deaths were expressed as a range by taking the mean and the 95 per cent upper bound of the CI for expected or predicted values. Verbal autopsies (VA) were conducted by trained field workers using a validated instrument for a subset of 585 deaths in the age group of 30-69 yr which occurred in the study tehsils as a part of an ongoing study to identify cardiac deaths. To assess the reliability of the cause of death (CoD) recorded in the CRS, International Classification of Diseases-10 (ICD-10) codes for the underlying CoD based on the VA interview were assigned by the authors . Appropriate ICD-10 codes were also assigned for the CoD recorded in the CRS portal for the same subset. Cohen's κ was calculated to assess agreement between physician-assigned CoD and that recorded in the CRS for major CoD categories. Cohen's κ ≥0.4 was interpreted as a moderate level of agreement .
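For illustration, a minimal sketch of this regression-extrapolation step is given below, assuming monthly death counts have already been aggregated from the CRS line list; the column names and values are illustrative and are not taken from the study data.

```python
import numpy as np
import pandas as pd

# Hypothetical January death counts by registration year (made-up values)
jan = pd.DataFrame({"year": [2016, 2017, 2018, 2019, 2020],
                    "deaths": [520, 535, 548, 561, 640]})

baseline = jan[jan["year"] <= 2019]
x = baseline["year"] - 2015                      # ordinal year code: 2016 -> 1, ..., 2019 -> 4
b, a = np.polyfit(x, baseline["deaths"], deg=1)  # slope (b) and intercept (a) for this month

expected_2020 = a + b * 5                        # extrapolate to x = 5 (the year 2020)
observed_2020 = jan.loc[jan["year"] == 2020, "deaths"].item()
excess = max(observed_2020 - expected_2020, 0)   # negative excess is set to 0
print(f"expected {expected_2020:.0f}, observed {observed_2020}, excess {excess:.0f}")
```

The per-month excess values obtained in this way, with negatives set to 0, would then be summed across months and across age-sex strata to obtain the cumulative all-cause estimate.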
A total of 7017 all-cause deaths were registered in Ballabgarh and Badkhal tehsils between January 1, 2020 and December 31, 2020. This was 8.9 per cent higher than the mean annual deaths (6446±147; 95% CI) registered from 2016 to 2019. In 2021, 6792 all-cause deaths were recorded until September 30, representing a 43.8 per cent increase over the mean for the same period (4723±277) ( ). There did not appear to be an increased delay in death registration as compared to previous years. All the 150 deaths identified from crematoria and a graveyard in the study area in 2020 were registered on the CRS portal. A significant increase (19% in 2020 and 56% in 2021) in deaths in the population >60 yr old was seen as compared to the average for 2016-2019, with no sex differences. Figures depict the week-wise COVID-19-confirmed cases and deaths in the Faridabad district reported in 2020 and 2021 (until September 30, 2021). A multimodal curve peaking in wk 48 was seen in 2020, while 2021 showed a steeply sloping unimodal curve peaking at wk 18. A total of 402 COVID-19 deaths were reported in the Faridabad district till December 31, 2020 and a further 314 were reported between January 1, 2021 and September 30, 2021 . The peaks of all-cause deaths observed correspond temporally and in terms of magnitude to infection surges in the district . Further figures depict the week-wise mortality reported in 2020 and 2021 along with the CIs of the historical average by sex and age group. These show that the highest estimate of excess mortality was in the ≥60 yr age group, with little impact on children below 14 yr, and that women had parallel though smaller peaks than men throughout the pandemic. Estimates of excess mortality derived by each of the three approaches for age group and sex are shown in . All three approaches gave overlapping estimates except for the 0-14 yr age group. The range of estimates by linear regression was wider than the others, while that of the historical average was the narrowest. The estimates were highest for the age group >60 yr and higher for men as compared to women, though the ranges were overlapping. The forecasting method showed a small excess death estimate range for the 0-14 yr age group, while the other two methods indicated no excess mortality in this age group. The smaller overall number of deaths in this group and an outlier value in 2016 affected the mean deaths, which could have influenced the estimates. Assuming that deaths were uniformly distributed in the district, the Badkhal and Ballabgarh Registrars' offices would account for 58 per cent (6446/11141) of deaths in Faridabad district; correspondingly, 233 of the 402 COVID-19 deaths in 2020 and 182 of the 314 COVID-19 deaths reported till September 30, 2021 would have been registered in these two tehsils . This gave us a ratio of estimated excess deaths, directly or indirectly attributable to COVID-19, to officially reported COVID-19 deaths of between 1.8 and 4 for 2020 and between 10.9 and 13.9 for 2021. Comparison of lay-reported and physician-assigned CoD showed moderate agreement for tuberculosis (κ=0.481; SE=0.039, P <0.001) and external injuries (κ=0.453; SE=0.041, P <0.001) ( ). There was poor agreement on all other major CoD categories. A high proportion of garbage codes (cardiac arrest or a mechanism of death such as heart failure or sepsis) was recorded in the CRS. None of the 585 sampled deaths from 2020 had COVID-19 recorded as an underlying CoD.
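The apportionment and reporting-ratio arithmetic reported above can be reproduced in a few lines; the figures are those quoted in the text, and the reporting ratio itself depends on the excess-death estimates in the study tables rather than anything computed here.

```python
# Share of district deaths registered in the two study tehsils (figures quoted in the text)
share = 6446 / 11141                 # ~0.58

# Apportioned officially reported COVID-19 deaths for the two tehsils
reported_2020 = round(share * 402)   # ~233 for 2020
reported_2021 = round(share * 314)   # ~182 for January-September 2021

# Reporting ratio = estimated excess deaths / apportioned reported deaths,
# evaluated with the excess-death ranges from the three estimation approaches
def reporting_ratio(excess_deaths: float, reported: int) -> float:
    return excess_deaths / reported
```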
There is consensus that the best estimate of the COVID-19 death toll can be assessed through the excess mortality approach . Our assessment of the appropriateness of CRS data from a district in India for excess mortality estimation for COVID-19 covering both the first and the second wave showed that the number of deaths registered increased in 2020/21, the three commonly used approaches used for excess mortality estimation provided overlapping estimates and the data were not appropriate for cause-specific excess mortality estimation. Pandemic-associated lockdowns and mobility restrictions did not seem to have affected the historically high death registration completeness in Ballabgarh and Badkhal tehsils . Our independent verification further validated this finding. As compared to the mean of the previous four years, about 450 additional deaths were registered in 2020 and the timeliness of death registration improved. Excess mortality estimates arrived at using three approaches overlapped thus indicating the robustness of the methods used. Pandemic-induced changes in registration coverage, if any, would affect the excess mortality estimates for all three approaches. Although a deficiency in registration could be made up by excess deaths, it is believed that this is unlikely in our study and, if present, would have resulted in further underestimation of excess deaths. There is a wide variation in estimated all-cause excess mortality globally depending on the severity of the pandemic. A population-level analysis conducted in England and Wales up to November 20, 2020 estimated 15 per cent excess mortality . Total mortality was higher by 9.9 per cent between March and November 2020 in Israel . Brazil reported 10.7 per cent excess mortality between January to June 2020 . In the United States, which has reported the highest number of COVID-19 deaths worldwide, an excess mortality of 22.9 per cent was reported between March 2020 and January 2021 . On the other end of the spectrum, a mere 0.03-0.72 per cent excess mortality was observed across prefectures in Japan , while New Zealand did not report any excess deaths in 2020 . Our relatively lower excess death estimate was consistent with lower mortality rates reported from India in 2020. A 3-6 times higher mortality in 2021 was corroborated by anecdotal evidence. Consistent with the global trend we also reported higher excess mortality in the elderly (≥60 yr) age group , , . Although COVID-19 affects both sexes equally, there are reports of more men than women dying from COVID-19 globally , . However, mortality risk for COVID-19 has been reported to be higher for women than men in India . The Institute for Health Metrics and Evaluation estimated the ratio of total COVID-19 deaths to reported COVID-19 deaths as 2.96 for India and later increased it to 5-25 for 2021 deaths . An ecological analysis of 22 countries has estimated that deaths attributed to COVID-19 are underestimated by 35 per cent . Another study tracking excess mortality figures across 89 countries reported a figure of 1.56 as a global undercount ratio of COVID-19 deaths . Our estimate of the ratio of total excess deaths to officially reported COVID-19 deaths was between 1.8-4 for 2020 and 10.9-13.9 for 2021. This ratio is dependent on the access to testing as also the completeness of death registration. Our findings suggested that excess mortality figures as estimated from this study were reasonable. 
It is to be emphasized here that these excess deaths include those directly as well as indirectly attributable to COVID-19. Officially reported COVID-19 deaths are those directly attributable to COVID-19, in which the deceased either tested positive for COVID-19 or a physician certified that the underlying CoD was COVID-19 related. Problems in reporting COVID-19 deaths arise for deaths occurring out of hospital and for indirect COVID-19 deaths, such as cardiac or cancer deaths that occurred due to a lack of access to care. The most important challenge in excess death estimation, common to all methods, is the availability of a baseline of expected deaths against which to compare the observed deaths . When estimating excess deaths, it is recommended that totals for the most disaggregated analyses be obtained and summed. The forecasting option should be resorted to when there are fewer than four years of historical data. Other approaches, including Poisson models of excess deaths, have been used by researchers, and these are likely to give similar results , , , . This exploratory study had several limitations. Being limited to only two blocks ( tehsils ) of one district, our estimate should not be used as an estimate for the country, as pandemic intensity and underreporting of deaths would vary widely between States as well as between urban and rural areas. Due to the smaller number of recorded deaths per week, we used the month as the unit of estimation. Any excess mortality estimate is ultimately dependent for its accuracy on stable population dynamics. The lack of data on possible migration in and out of the district and on recent population denominators and the age distribution of the district were additional limitations. In conclusion, our study indicated the usefulness of CRS mortality data for estimating all-cause excess mortality due to COVID-19 in India. This approach may be considered for the estimation of excess deaths due to COVID-19 at the district, State or national level, subject to the availability of data.
|
Effects of Chinese provincial CDCs WeChat official account article features on user engagement during the COVID-19 pandemic
|
416d607f-7108-4c0f-85ac-2c38ee694984
|
10101727
|
Health Communication[mh]
|
Study data collection The element and engagement metrics for this study consisted of all articles posted by already existing WOAs of the Chinese provincial CDCs between January 1, 2019, and December 31, 2020. All data collection was conducted from February 10 to July 31, 2021. Provincial CDC WOAs published a total of 26 302 articles between 2019 and 2020, all of which we included in our analyses. All data are publicly available on WeChat. Variables and characteristics Incorporating characteristics relevant to the COVID-19 pandemic, we developed a standardized questionnaire and established the coding norms to extract features based on the previous frame used by Zhang et al. . A frame includes nine characteristics composed of push time, release position, title type, article content, article type, communication skills, marketing elements, article length, and video length. Based on this coding instrument, trained professional interviewers collected data through an online survey. SKC, CXL, and MWW resolved uncertainties regarding the answer choice by discussion following a full assessment. Detailed variable definitions are shown in Table S1 in the Online Supplementary Document. User engagement behaviors WeChat engagement behavior is defined as users reacting to an article, such as reading, loving, and re-sharing. Due to the low level of thumbs-up behavior, we only included two dependent variables – reading and re-sharing level. Reading was defined as “How many people have read the article?” and re-sharing as “Readers rebroadcast them by simply retweeting them to a public space on WeChat, others can read articles being shared by ‘Discover>Top Stories>Wow’”. We obtained the “reads” and “Wow” counts by clicking on “Subscriptions-articles” at the bottom of each article on the WeChat homepage. Statistical analysis The independent variables article type, communication skills, and marketing elements do not contain mutually exclusive categories in an article, so we reclassified and coded data based on the most common combination (>85%). Because combinations of marketing elements are scattered, we converted these characteristics into quantities. The different stages of the COVID-19 pandemic were defined according to the daily cumulative confirmed cases data in China between January 22 and December 31, 2020 . For outcome variables, we used the 75th percentile as the cut-off point to categorize articles as high or low reading level and re-sharing level because the data were not normally distributed. We generated descriptive statistics for characteristics of WOA articles to determine the frequency of each coding item. We used a χ2 test to determine the difference in categorical data and filter variables ( P < 0.05). With the low reading and re-sharing level category as the reference, we used binary logistic regression analysis to study the association between article characteristics (categorical independent variables) and user engagement to obtain the odds ratios (ORs) and corresponding 95% confidence intervals (CIs). We considered P < 0.05 (two-sided) as statistically significant. Furthermore, we used significant variables in the logistic regression analysis as the final predictors to create a nomogram. We also used a receiver operating characteristic (ROC) curve to evaluate the discriminative performance of the nomogram model . We used bootstrapping to perform internal validation of the original data to predict the accuracy of the nomogram model.
The sum of each variable’s total score can be used to estimate the probability of high level of user engagement behaviors. We performed all statistical analyses using SPSS (version 25), Python (version 3.10.4), and R (version 4.2.1).
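A minimal sketch of the dichotomisation and logistic-regression step described above is given below, using commonly available Python libraries (pandas, statsmodels, scikit-learn); the column names, categories and values are illustrative and are not taken from the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Hypothetical coded data: one row per article (all values are made up)
rng = np.random.default_rng(0)
articles = pd.DataFrame({
    "release_position": ["main", "secondary"] * 100,
    "word_count_band":  ["<1000", "1000-1499", "1500-2000", ">2000"] * 50,
    "reads":            rng.integers(100, 20000, 200),
})

# Dichotomise engagement at the 75th percentile, as in the study
cutoff = articles["reads"].quantile(0.75)
articles["high_reading"] = (articles["reads"] > cutoff).astype(int)

# Binary logistic regression; exponentiated coefficients give odds ratios (ORs)
model = smf.logit("high_reading ~ C(release_position) + C(word_count_band)",
                  data=articles).fit(disp=False)
summary = pd.concat([np.exp(model.params).rename("OR"),
                     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                    axis=1)
print(summary)

# Discrimination of the fitted model (ROC AUC) on the same data
print("AUC:", roc_auc_score(articles["high_reading"], model.predict(articles)))
```

In the study itself this analysis was run separately for each pandemic stage and for both the reading and re-sharing outcomes; the same pattern applies with the outcome column swapped.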
Characteristics of article features After a manual search of provincial CDC-related keywords via the platform’s public search function, we found 31 Chinese provincial CDCs had opened WOAs for the dissemination of health knowledge. We coded a total of 26 302 articles . Guangxi Province CDC had the highest proportion among the included articles, while the Xinjiang Province CDC had the lowest (n = 47, 0.2%). The most common code of release position, title type, article content, article type, communication skills, marketing elements, article length and video length were secondary push (53.4%), declarative sentence/ phrase (59.1%), content not related to COVID-19 (57.8%), text and pictures (60.6%), guidance/education/advice/appeal (68.7%), one marketing element (60.0%), <1000 words (61.6%), and no video (89.0%) . After the COVID-19 outbreak, the number of articles increased approximately 2-fold as compared to the previous year. According to push time, we analyzed dynamic changes of WOA articles and official COVID-19 case counts. The number of articles posted by WeChat and the progress of the COVID-19 pandemic are nearly synchronized . According to longitudinal trends, we defined March 1, 2020, as an inflection point toward normalization of the pandemic, after which the number of articles gradually decreased. Analysis of features affecting the user engagement behaviors during the COVID-19 pandemic Multivariable logistic regression showed that prior to the pandemic release position, title type, article content, article type, communication skills, marketing elements, article length and video length contributed significantly to explaining the level of engagement . Concerning release position, when compared with secondary push, those using main push were more likely to receive high-level reading (OR = 4.867; 95% CI = 4.132-5.732) and re-sharing level (OR = 3.131; 95% CI = 2.706-3.623). Articles that featured exclamation/emphasis in the title attracted higher levels of user engagement (reading level: OR = 2.133; 95% CI = 1.826-2.491, re-sharing level: OR = 1.495; 95% CI = 1.289-1.735), while combinations of the above sentences in title type generally had low re-sharing level. Content, including other infectious diseases, chronic diseases, food safety and nutrition, vaccination, environmental and occupational health, health education activities, and healthy lifestyle, was associated with higher reading and re-sharing levels ( P < 0.05) compared with other article contents. A combination of text, links, and pictures was associated with higher reading (OR = 3.530; 95% CI = 2.214-5.628) and re-sharing level (OR = 2.827; 95% CI = 1.791-4.462) compared with text alone. The greatest communication skills and marketing elements to promote the reading and re-sharing level were a combination of guidance/education/advice/appeal and negative emotional appeal (reading level: OR = 2.741; 95% CI = 2.205-3.408, re-sharing level: OR = 2.401; 95% CI = 1.950-2.956) and one marketing element (reading level: OR = 1.454; 95% CI = 1.258-1.681, re-sharing level: OR = 1.889; 95% CI = 1.643-2.172) compared to the reference, respectively.
For article length, the greatest contribution to the reading and re-sharing level was 1000-1499 words (OR = 2.161; 95% CI = 1.858-2.513) compared with <1000 words, and closely followed by 1500-2000 words, while the least contribution was >2000 words. Articles with 1-149- second-, 150-300 second-, or >300-second-long videos were associated with a higher tendency of reading and re-sharing levels than no video. During the outbreak, release position, title type, and video length displayed a similar pattern with non-pandemic . However, pictures only (reading level: OR = 0.478; 95% CI = 0.326-0.699, re-sharing level: OR = 0.488; 95% CI = 0.333-0.713), and text and pictures (reading level: OR = 0.618; 95% CI = 0.462-0.827, re-sharing level: OR = 0.735; 95% CI = 0.551-0.980) were associated with a lower tendency of engagement behaviors compared to text alone. With the exception of a combination of guidance/education/advice/appeal and negative emotional appeal, positive emotional appeal also displayed a significant difference, receiving nearly 1.6 times as many likes as guidance/education/advice/appeal. Notably, posts that featured marketing elements (>1) (OR = 0.666, 95% CI = 0.484-0.917) or titles containing obvious COVID-19-related words (OR = 0.759; 95% CI = 0.642-0.897) attracted lower re-sharing levels. Additionally, articles with only 1000-1999 words were associated with higher reading and re-sharing levels. Nearly all articles from Chinese provincial CDCs during the outbreak were related to COVID-19 (>85%); consequently, we analyzed content characteristics related to COVID-19. The result showed that content about COVID-19 pandemic reports and guidance for public protection achieved the greatest contribution to reading (OR = 7.410; 95% CI = 4.771-11.509) and re-sharing levels (OR = 3.980; 95% CI = 2.663-5.949). During normalization, article features such as main push, a combination of exclamation/emphasis, positive emotional appeal, a combination of text, links, and pictures had the greatest contribution to reading and re-sharing levels compared to the control group . For article content, the public showed the greatest concern relating to COVID-19 pandemic reports and guidance for public protection (reading level: OR = 12.340; 95% CI = 9.357-16.274, re-sharing level: OR = 7.254; 95% CI = 5.554-9.473) and vaccination (reading level: OR = 7.323; 95% CI = 5.256-10.202, re-sharing level: OR = 4.850; 95% CI = 3.514-6.694). Content about national health policy, conferences, and other outlets were more likely to obtain low reading and re-sharing levels. Regarding marketing elements, a quantity greater than one attracted a high level of reading (OR = 1.274; 95% CI = 1.089-1.490). Articles containing 1500-2000 words were 1.801 and 1.610 times more likely to result in high-level reading and re-sharing than articles containing <1000 words. Predictive nomogram for the probability of high level of user engagement Based on the final logistic regression analysis, we constructed a nomogram to predict article features associated with user engagement during normalization for the COVID-19 pandemic . The top three of eight key factors involved in article features at the reading and re-sharing level were the release position, article content, and article type. 
The calibration curve was in general agreement with the ideal curve and AUC was 81.6 (95% CI = 80.8-82.4) for the reading level and 77.9 (95% CI = 77.0-78.8) for the re-sharing level, indicating that the prediction model had good discriminatory power and calibration (Figure S1 in the Online Supplementary Document). We can search the corresponding score for the point scale axis of each article feature to gain the probability of a high level of user engagement during normalization for the COVID-19 pandemic.
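As a companion to the sketch above, the bootstrap internal validation step can be illustrated as follows; the data here are synthetic stand-ins rather than the study's coded articles, so the printed numbers only demonstrate the mechanics of optimism correction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 5)).astype(float)   # dummy-coded article features (synthetic)
y = rng.integers(0, 2, size=200)                       # high/low engagement label (synthetic)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap estimate of optimism: refit on resamples and compare each resample AUC with
# the AUC the refitted model achieves on the original data
optimism = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:                     # skip degenerate resamples
        continue
    boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    optimism.append(roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
                    - roc_auc_score(y, boot.predict_proba(X)[:, 1]))

corrected_auc = apparent_auc - float(np.mean(optimism))
print(f"apparent AUC {apparent_auc:.3f}, optimism-corrected AUC {corrected_auc:.3f}")
```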
To the best of our knowledge, this is the first study to analyze the article features of Chinese provincial CDC WOAs at different pandemic stages and give insight into how those features have evolved and maximized user engagement during the COVID-19 pandemic. We found different feature patterns of articles at different pandemic stages and can thus provide valuable recommendations for government health agencies to use the best patterns to improve online user engagement when dealing with public health emergencies in the future. Notably, article content, article type, and release position were the most prominent features affecting article reading and re-sharing, indicating that the predominance of these features would be more appealing to readers. We found that article content took priority over any other features during the COVID-19 pandemic. Significant topics, especially other infectious diseases, vaccinations, and healthy lifestyles, had more positive effects on the number of readings and re-sharing than other topics, in line with a previous study . Moreover, qualitative analyses and classifications of social media article content can provide important information for communications during a pandemic . An analysis of Sina Weibo showed that domestic pandemics, quarantines, and investigations attracted more attention during the COVID-19 pandemic . We also discovered that content combining pandemic reports and guidance for public protection was most appealing to readers after the pandemic occurred. Additionally, vaccination has always been a priority at different periods, suggesting the importance of increasing content that the public is concerned about in the future. Other than article content, another characteristic that significantly influenced an article's diffusion was article type. A combination of text, links and pictures has always been a priority over text only at different periods, possibly due to the effect of links. There is more controversy about the impact of links at the time of an emerging infectious disease . For example, Xie et al. found that articles with links received fewer re-shares during the COVID-19 pandemic, but Ngai et al. reported a positive effect on sharing. Additionally, we found that pictures only or a combination of text and pictures during the outbreak period had a more negative effect on user engagement than text only, but other studies argued that articles with pictures have been re-shared with higher frequency . However, emerging reports regarding the pandemic outbreak confirmed the positive effect of text-only articles . This might be due to citizens paying more attention to an article's textual content, rather than pictures, in the face of public health emergencies , or to more pictures causing unnecessary consumption . This is similar to the result from our study, which showed that marketing elements, such as persons of authority or information sources, did not have a positive impact on users' information behavior during a pandemic outbreak, suggesting that the association between article richness and user engagement should be further defined.
Textual content has a positive impact on users' decisions to read and re-share an article but is not sufficient on its own; during the COVID-19 pandemic, an article's rich elements should not simply be considered "the more the better", and users might pay more attention to the text content itself. Release position affected users' information behavior concerning liking and re-sharing an article at any time, because enlarged cover pictures generated more traffic, suggesting that important information should be placed in the main push position. Attractive titles can engage more of the public, and our results also suggest the importance of an exclamation/emphasis title. We found that titles containing obvious COVID-19-related words had a negative impact on users' re-sharing behavior during the COVID-19 outbreak, which was inconsistent with reports from the Guangzhou CDC. This might be because most articles during the outbreak were related to the COVID-19 pandemic, so COVID-19-related titles did not generate additional public attention. Regarding communication skills, before the pandemic occurred, articles combining guidance/education/advice/appeal and negative emotional appeal had a greater positive effect on user engagement than guidance/education/advice/appeal alone. As time progressed, emotions became more diverse. During the outbreak, a combination of guidance/education/advice/appeal and negative emotions was most likely to promote citizen engagement, followed by positive emotional appeal. In agreement with reports on article content related to the pandemic, cancer, and other topics, negative emotions led to increased engagement. This finding might be because article content with negative emotions spreads faster, especially in the face of unexpected events. When the pandemic gradually normalized, positive emotional articles also influenced public engagement behaviors online. Our results suggest that articles with emotional appeals promoted citizen engagement behaviors, especially during the COVID-19 crisis. WeChat provides a good channel for sharing long articles. Our results showed that articles with 1000-1499 and 1500-2000 words had a greater positive effect on user engagement than articles with 0-1000 words during any period, indicating that article length significantly affected an article's diffusion. Videos also had a positive effect on user engagement during any period, and the optimal video length for user engagement during the outbreak was 150-300 seconds. This is because long texts or videos are likely to provide richer information than shorter ones. However, we focused predominantly on the traditional dimension of citizen engagement; more comprehensive engagement metrics should be explored to represent user engagement. Our results also might not apply to other social media platforms or to other countries responding to public health emergencies. We found that the determinants of users' behavior, including release position, title type, article content, article type, communicative skills, marketing elements, article length, and video length, differed between pandemic stages. For example, during an infectious disease outbreak, articles released in the main push position, with exclamation/emphasis titles, content focused on pandemic reports and guidance for public protection, a combination of text, links, and pictures, guidance/education/advice/appeal combined with negative emotional appeal, 1000-1999 words, and 150-300-second videos will attract more attention.
We should encourage the health sector to make greater use of social media, with a focus on educating the public, to reduce the burden on health care. These findings provide a reference for government health agencies to improve the features of health information on the WeChat platform and engage their target audience during public health threats, allowing for a better grasp of information relevant to decreasing the threat of the pandemic. We found that determinants of users' behavior, including release position, title type, article content, article type, communicative skills, marketing elements, article length, and video length, differed between pandemic stages. Notably, article content, article type, and release position were the most prominent features affecting article dissemination, which could help public health agencies choose the best patterns to improve online user engagement when dealing with future public health emergencies.
Online Supplementary Document
Clinical Circulating Tumor DNA Testing for Precision Oncology
As we currently live in an era of information and advanced genomics, we should maintain our focus on how we respond to and process the overwhelming amount of information we encounter. For example, Mandel and Metais first described the presence of nucleic acids in human blood in 1948, but several decades passed before attention was paid to the vast amount of information supplied by nucleic acids in the blood. However, since the discovery of mutant RAS gene fragments in the blood of cancer patients in 1994 and the detection of microsatellite DNA changes in the serum of cancer patients in 1996, the information contained within the nucleic acids in the blood has gradually gained attention. Blood contains cellular components and numerous biological substances, such as extracellular vesicles, proteins, and nucleic acids, including mRNAs, miRNAs, and cell-free DNA (cfDNA). cfDNA refers to any non-encapsulated DNA within the bloodstream originating from various cell types. The portion of cfDNA in the blood of cancer patients that is released from tumor cells via apoptosis, necrosis, or active release is commonly referred to as circulating tumor DNA (ctDNA). ctDNA has gained increasing attention since 2010 because of the potential to detect, with novel and sensitive laboratory methods, early cancer metastases that cannot be detected by high-resolution imaging techniques. BRACAnalysis (Myriad Genetic Laboratories, Salt Lake City, UT) was the first Food and Drug Administration (FDA)-approved companion diagnostic test to use blood specimens from ovarian cancer patients, approved alongside the development of a treatment targeting a gene mutation. Expectations arose that ctDNA could lead to drug treatments for cancer patients. Since then, the number of tests and studies related to ctDNA has grown exponentially. Nucleic acids in the blood are heterogeneous depending on their origin. ctDNA analysis can provide more comprehensive information than a conventional tissue biopsy, which has the spatial limitation inherent in sampling due to tumor tissue heterogeneity. It is estimated that up to 3.3% of tumor DNA enters the blood daily from 100 g of tumor tissue, equivalent to 3×10^10 tumor cells. On average, the size of ctDNA varies from small fragments of 70-200 base pairs to large fragments of up to 21 kb. It is important to note the relatively short half-life of ctDNA in the circulation, ranging from 16 minutes to 2.5 hours. Although many tumor-specific abnormalities (e.g., mutations in oncogenes or tumor suppressor genes, changes in DNA integrity, abnormal gene methylation, changes in microsatellites, mitochondrial DNA load, and chromosomal alterations) can be detected using ctDNA, a number of obstacles exist in the implementation of ctDNA for screening and diagnosis. First, normal hematopoietic cells and other non-tumor cell types also contribute DNA to the cfDNA pool in the blood and can cause false positives in ctDNA assays in cancer patients. Not all somatic mutations detected in ctDNA analyses are of cancer origin; clonal expansion of somatic variants can be observed in healthy individuals and may represent clonal hematopoiesis of indeterminate potential (CHIP). CHIP frequency increases with age, with only 1% of people under the age of 50 but >10% of those over the age of 65 exhibiting CHIP. These abnormalities commonly occur in the DNMT3A, TET2, and ASXL1 genes, but have also been reported in other genes such as TP53, JAK2, SF3B1, GNB1, PPM1D, GNAS, and BCORL1.
The simultaneous occurrence of CHIP variants and tumor-derived mutations in these genes may make ctDNA assays difficult to interpret. The second issue is the low concentration of ctDNA (1-10 ng/mL in asymptomatic individuals). Depending on the concentration of ctDNA, a false negative result is possible; therefore, the sample volume is an important factor affecting the results. Third, the variant allele frequency (VAF) of ctDNA is usually much lower than that in tumor tissue, often below 1%, and can be affected by factors such as cancer type, stage, and clearance rate. Interpretation of the results therefore requires careful decisions about the allele-frequency threshold applied to detected variants. Fourth, there is a lack of consensus on how ctDNA detection should be performed, from the extraction stage to the final in silico variant analysis stage. Even the nomenclature related to ctDNA lacks a proper consensus. Given the rapidly increasing clinical use of ctDNA testing, the demand for a proper consensus on ctDNA-related issues remains unmet. This review describes the currently available ctDNA assays based on different methodologies, ranging from traditional methods to more recent advanced molecular technologies. We focus on the unmet need for clinical validation of ctDNA testing by reviewing the validation and approval processes of the FDA and the European Commission in vitro Diagnostic Medical Device (CE-IVD) marking, among others. This review addresses frequently raised questions regarding the clinical application of ctDNA assays, summarizes the current status of approved and validated ctDNA assays, and discusses the future direction of ctDNA testing.
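Before turning to specific assays, a rough back-of-the-envelope calculation helps show why the low ctDNA concentration and low VAF described above are so constraining. All numbers in the sketch below are illustrative assumptions rather than values from any particular assay or study.

```python
# Back-of-the-envelope sketch: genome equivalents recovered from a plasma draw
# and the number of mutant copies expected at low VAF. All inputs are assumed
# example values.
CFDNA_NG_PER_ML = 10.0       # assumed cfDNA concentration (ng per mL of plasma)
PLASMA_VOLUME_ML = 4.0       # assumed plasma volume recovered from one blood tube
PG_PER_HAPLOID_GENOME = 3.3  # ~3.3 pg of DNA per haploid human genome

total_ng = CFDNA_NG_PER_ML * PLASMA_VOLUME_ML
genome_equivalents = total_ng * 1000 / PG_PER_HAPLOID_GENOME
print(f"Input DNA: {total_ng:.0f} ng ~ {genome_equivalents:.0f} genome equivalents")

for vaf in (0.01, 0.005, 0.001):
    mutant_copies = genome_equivalents * vaf
    print(f"VAF {vaf:.2%}: ~{mutant_copies:.1f} mutant copies expected in the input")
# With only a handful of expected mutant copies, sampling noise alone can cause
# false negatives, which is why low-VAF calls demand more input DNA (and plasma).
```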
Before introducing the ctDNA test, the terminology and definitions of ctDNA must be clarified. Bronkhorst et al. proposed a nomenclature system for three highly investigated diagnostic areas based on the biological compartment in which the cfDNA is distributed (depending on its presence in circulation) and its origin. cfDNA is highly heterogeneous, and a broader concept is needed that covers both nuclear and microbial DNA. In this scheme, DNA of human origin includes nuclear and mitochondrial DNA, while microbial DNA encompasses bacterial and viral DNA that is not of human origin. Part of the nuclear DNA in the plasma of cancer patients is ctDNA. ctDNA usually refers to all types of tumor-derived DNA in the circulating blood, as discussed in this review. DNA abnormalities take different forms, and each has distinct features. These features of ctDNA have different potential clinical implications. Genomic aberrations of somatic origin detectable in ctDNA include mutations, chromosomal rearrangements, and copy number changes. Additional characteristic features of ctDNA are specific epigenetic aberrations, such as methylation patterns, and differences in DNA fragment length. Although ctDNA tests vary in their genomic features and coverage of the genes of interest, the basic principles of the tests remain the same. Two categories exist: targeted approaches that test for a small number of known mutations, and untargeted approaches that broadly test for unknown targets. Targeted approaches include real-time polymerase chain reaction (RT-PCR), digital PCR (dPCR), and beads, emulsion, amplification, and magnetics (BEAMing) technology, whereas broader approaches include high-throughput sequencing methods based on next-generation sequencing (NGS), whole exome sequencing (WES), whole genome sequencing (WGS), and mass spectrometry-based detection of PCR amplicons, among others.
1. ctDNA detection methods
1) RT-PCR
RT-PCR is widely used for variant screening because it is relatively inexpensive and fast. The variants are detected via the binding of complementary sequences using fluorescently labeled sequence-specific probes, and the fluorescence intensity is related to the amount of amplified product. The sensitivity of RT-PCR is approximately 10%, which is lower than that of other test methods. Cold amplification at a lower denaturation temperature PCR (COLD-PCR) is a variant assay that improves RT-PCR sensitivity. COLD-PCR concentrates mutated DNA sequences in preference to the wild type using a lower-temperature denaturation step during the cycling protocol. The denaturation temperature for a given sequence is adjusted within ±0.3°C to allow selective denaturation and amplification of mutated sequences, while double-stranded wild-type sequences are amplified less. This assay can enrich mutant sequences, improving the sensitivity for the mutant allele frequency (MAF) to approximately 0.1%. PCR-based methods have the advantage of high sensitivity and cost-effectiveness but are limited to known variants and constrained in input and throughput.
2) Digital PCR
dPCR shares the same reaction principle as RT-PCR, except that the samples are dispersed into arrays or droplets, resulting in thousands of parallel PCR reactions. dPCR can quantify a low fraction of variants against a high background of wild-type cfDNA, using a single or a few DNA templates per array/droplet, and has a sensitivity of approximately 0.1%.
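Quantification in dPCR is commonly described with a Poisson correction: the fraction of positive partitions gives the mean number of target copies per partition, and hence an absolute concentration from which a mutant fraction can be computed. The sketch below illustrates that calculation with invented droplet counts and an assumed partition volume; it is a simplified illustration, not any vendor's algorithm.

```python
# Poisson-corrected quantification as commonly described for digital PCR.
# Droplet counts and partition volume are invented example numbers.
import math

def copies_per_ul(positive: int, total: int, partition_volume_nl: float = 0.85) -> float:
    """Estimate target copies per microliter from positive-partition counts.

    Under a Poisson model, lambda = -ln(1 - p) is the mean number of copies
    per partition, where p is the fraction of positive partitions.
    """
    p = positive / total
    lam = -math.log(1.0 - p)
    return lam / (partition_volume_nl * 1e-3)  # convert nL to uL

mut = copies_per_ul(positive=30, total=18000)    # mutant-specific assay
wt = copies_per_ul(positive=15000, total=18000)  # wild-type-specific assay
vaf = mut / (mut + wt)
print(f"mutant ~{mut:.1f} copies/uL, wild-type ~{wt:.0f} copies/uL, VAF ~{vaf:.2%}")
```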
The dPCR method can be applied to cancer personalized profiling by deep sequencing (CAPP-Seq) in combination with molecular barcoding technologies that improve sensitivity by reducing background sequencing error. These approaches improve the sensitivity of CAPP-Seq up to three-fold and, when combined with molecular barcoding, yield approximately 15-fold improvements. cfDNA enrichment is conducted by a two-step PCR procedure during sample preparation. The first PCR amplifies the mutational hotspot regions of several genes in a single tube. The second PCR is a nested PCR with uniquely barcoded primers for sample labeling. The final PCR products are pooled and partitioned for sequencing. The advanced dPCR assay (BEAMing) is a highly sensitive approach with a detection limit of 0.02%. This approach consists of four principal components: beads, emulsion, amplification, and magnetics. BEAMing combines dPCR with magnetic beads and flow cytometry. In BEAMing, the primer binds to the magnetic beads via a biotin-streptavidin complex. On average, each microemulsion droplet contains no more than one template molecule and one bead, and PCR is performed within each droplet. At the end of the PCR process, the beads are magnetically purified. After denaturation, the beads are incubated with oligonucleotides to distinguish between different templates. The bound hybridization probe is then labeled with a fluorescently labeled antibody. Finally, the amplified products are counted as fluorescent beads by flow cytometry. However, the BEAMing method is impractical for routine clinical use due to its workflow complexity and high cost.
3) Mass spectrometry
The mass spectrometry-based method combines matrix-assisted laser desorption/ionization time-of-flight mass spectrometry with a conventional multiplex PCR. An example of this method is UltraSEEK (Agena Bioscience, San Diego, CA). UltraSEEK consists of a two-step PCR for amplification and mass spectrometry for detection. The two-step PCR consists of a multiplex PCR followed by a mutation-specific single-base extension reaction. The extension reaction uses a single mutation-specific chain terminator labeled with a moiety for solid-phase capture. Captured, washed, and eluted products are examined for mass, and mutational genotypes are identified and characterized using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. UltraSEEK has the advantage of simultaneous multiplex detection of mutant sequences and can detect a MAF of approximately 0.1%.
4) Next-generation sequencing
NGS, also known as massively parallel sequencing technology, can characterize cancer at the genomic, transcriptomic, and epigenetic levels. NGS is a highly sensitive assay that can detect mutations at a MAF of <1% using the latest platforms. NGS can analyze several million short DNA sequences in parallel and conduct sequence alignment to a reference genome or de novo sequence assembly. Depending on the panel configuration, NGS panels can be targeted to analyze known variants or untargeted to screen for unknown variants. Targeted panels are preferred due to their high sensitivity and low cost but are largely limited to point mutation and indel analysis. Several NGS methods can be applied to targeted panels with adjustable sensitivity, including tagged amplicon deep sequencing, the safe sequencing system, and CAPP-Seq.
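The molecular barcoding idea mentioned above can be illustrated with a toy consensus step: reads that share a unique molecular identifier (UMI) are assumed to derive from the same original cfDNA fragment, so a base call seen in only a minority of a UMI family is treated as sequencing error rather than a candidate variant. The sketch below is a simplified, hypothetical illustration of this error-suppression principle, not any particular assay's pipeline.

```python
# Toy illustration of UMI-based error suppression: collapse reads sharing a UMI
# into a single consensus base call at one position. Data are invented.
from collections import Counter, defaultdict

# (umi, base observed at the position of interest) for a handful of reads
reads = [
    ("AACGT", "A"), ("AACGT", "A"), ("AACGT", "G"),  # one sequencing error in this family
    ("TTGCA", "A"), ("TTGCA", "A"), ("TTGCA", "A"),
    ("CGTAC", "T"), ("CGTAC", "T"),                  # fragment carrying a true variant
]

families = defaultdict(list)
for umi, base in reads:
    families[umi].append(base)

consensus = {}
for umi, bases in families.items():
    base, count = Counter(bases).most_common(1)[0]
    # require a clear majority before trusting the call for this family
    consensus[umi] = base if count / len(bases) >= 2 / 3 else "N"

print(Counter(consensus.values()))  # each family contributes one error-suppressed call
```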
On the other hand, WGS or WES using untargeted panels allows detection of unknown DNA variants throughout the entire genome (or exome). Different genome-wide sequencing methods have been proposed for different variation types, such as personalized analysis of rearranged ends, digital karyotyping, and the Fast Aneuploidy Screening Test-Sequencing System. However, genome-wide sequencing requires a large sample, making its application to ctDNA difficult due to the low concentrations of ctDNA in samples. There have been attempts to analyze differences in DNA fragmentation. There is a marked difference in fragment length between ctDNA and normal cfDNA. The fragment length of ctDNA is consistently shorter than that of normal cfDNA. In addition, ctDNA with a low MAF (<0.6%) is associated with a longer ctDNA fragment length when compared to normal cfDNA. Moreover, most cancers of different origins showed fragmentation profiles of varying lengths. This characteristic DNA fragmentation provides a proof-of-principle approach applicable to screening, early detection, and monitoring of various cancer types. NGS application has been extended to microsatellite instability (MSI) detection. Loss of DNA mismatch repair (MMR) activity leads to an accumulation of mutations that could otherwise be corrected by MMR genes. A deficiency in MMR activity is often caused by germline mutations or aberrant methylation. The MSI phenotype of MMR deficiency refers to the shortening or lengthening of tandem DNA repeats in coding and noncoding regions throughout the genome. Tumors in which at least 30% to 40% of microsatellite loci are unstable, termed microsatellite instability-high (MSI-H), reportedly have a better prognosis than microsatellite-stable (MSS) tumors and tumors with low MSI. MSI has been documented in various cancer types, including colon, endometrial, and stomach cancers. The FDA has approved pembrolizumab to treat MSI-H cancer regardless of the tumor type or site. NGS-based methods utilize various MSI detection algorithms such as MSIsensor, mSINGS, MANTIS, and bMSISEA, which have demonstrated concordance rates ranging from 92.3% to 100% with the PCR-based method. NGS can reliably detect MSI status with a ctDNA fraction as low as 0.4%.
5) Methylation analysis
Epigenetic information such as methylation is more specific to the tissue of origin than genetic mutations. Changes in DNA methylation patterns occur early in tumor development and have been reported to help early screening for cancers of unknown origin. Methylation analysis is not routinely or commonly used to detect ctDNA, but it can be partially applied to cancer patients. Methods can be broadly divided according to whether they target candidate genes or survey methylation genome-wide. GRAIL's technology applied DNA methylation patterns to differentiate cancer cell types or tissues of origin. Most cfDNA methylation analysis methods apply a candidate gene approach because of the low analytical cost and the efficiency of using pre-established epigenetic biomarkers. Bisulfite treatment-based assays distinguish cytosine methylation and are generally the preferred ctDNA methylation detection method. The analytical principle is based on treating the DNA with bisulfite to convert unmethylated cytosine residues to uracil. Two types of ctDNA methylation analysis exist: PCR-based methods that apply specific primers or melting temperatures, and sequence-based methods such as direct sequencing or pyrosequencing.
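The principle behind these bisulfite-based readouts can be illustrated with a toy conversion step: after bisulfite treatment and PCR, unmethylated cytosines are read as thymine while methylated cytosines remain cytosine, so methylation becomes an ordinary sequence difference detectable by MSP primers, melting analysis, or sequencing. The sequence and methylated positions in the sketch below are invented for illustration.

```python
# Toy bisulfite-conversion sketch: unmethylated C is read as T after conversion
# and PCR, while methylated C stays C. Sequence and positions are invented.
def bisulfite_convert(seq: str, methylated_positions: set) -> str:
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")   # unmethylated cytosine -> uracil, read as T
        else:
            out.append(base)  # methylated cytosine is protected and stays C
    return "".join(out)

template = "ACGTCGACGTTACG"
converted = bisulfite_convert(template, methylated_positions={4})  # assume one methylated CpG
print(template)   # ACGTCGACGTTACG
print(converted)  # ATGTCGATGTTATG -> methylation state is now a sequence difference
```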
However, the accuracy of bisulfite pyrosequencing is only maintained down to a methylation level of approximately 5%. Methylation-specific PCR (MSP) can distinguish DNA sequences using sequence-specific PCR primers after bisulfite conversion. The methylation-sensitive high-resolution melting (MS-HRM) protocol is based on comparing the melting profiles of PCR products from unknown samples with profiles of specific PCR products derived from methylated and unmethylated control DNAs. The protocol consists of PCR amplification of bisulfite-modified DNA with primers and subsequent high-resolution melting analysis of the PCR product. MSP and MS-HRM can accurately detect methylated DNA present at a fraction of about 0.1%.
6) Hybrid sequencing (NanoString)
The nCounter Technology (NanoString Technologies, Seattle, WA) is a novel technology developed to screen clinically relevant ALK, ROS1, and RET fusion genes in lung cancer tissue samples. NanoString is applicable to RNA, miRNA, or protein and, more recently, to ctDNA. Target ctDNA is directly tagged with capture and reporter probes that are specific to the target variant of interest, creating a unique target-probe complex. The probes include a fluorescent reporter and a secondary biotinylated capture probe that allows immobilization onto the cartridge surface. The target-probe complex is immobilized and aligned on the imaging surface. The labeled barcode of the complex is then directly counted by an automated fluorescence microscope.
2. International efforts for advanced precision medicine in ctDNA analysis
The availability of new ctDNA testing methods and continuous scientific advances have resulted in several new problems. Factors affecting ctDNA testing outcomes are present from the sample collection phase to the final reporting phase. The American Society of Clinical Oncology (ASCO) and the College of American Pathologists (CAP) reviewed the framework for future research into clinical ctDNA tests in 2018. That review categorizes the key factors that affect ctDNA testing for oncology patients into preanalytical variables for ctDNA specimens, analytical validity, interpretation and reporting, and the clinical validity and utility of each test. Plasma is the most suitable sample recommended for ctDNA testing, as is the use of specific types of sample collection tubes such as cell-stabilizing tubes (Cell-Free DNA BCT [STRECK tubes] and PAXgene Blood DNA tubes [Qiagen]) or conventional EDTA anticoagulant tubes. Leukocyte stabilization tubes can extend the preprocessing window to 48 hours after collection, but EDTA anticoagulant tubes require processing within 6 hours. However, few studies have examined the preanalytical variables affecting ctDNA testing, and guidelines are needed to validate their clinical utility. Considering the many variable factors and the different types of ctDNA assays based on different methods, the validity of each analysis must be comparable. Current clinical ctDNA analyses require a clear assessment of the validity of the individual analyses. To increase the precision of ctDNA assays, best practices, protocols, and quality metrics for NGS-based ctDNA analyses must be developed. The Sequencing Quality Control Phase 2 (SEQC2) consortium organized by the FDA is an international group of members from academia, government, and industry (https://www.fda.gov/science-research/bioinformatics-tools/microarraysequencing-quality-control-maqcseqc#MAQC_IV).
The SEQC2 Oncopanel Sequencing Working Group developed a translational scientific infrastructure to be applied to practice in precision oncology. The Oncopanel Sequencing Working Group evaluated panels/assays, genomic regions, coverage, VAF ranges, and bioinformatics pipelines, using self-constructed reference samples. This work on the analytical performance evaluation of oncopanels/assays for small variant detection includes: (1) comprehensive solid tumor oncopanel examination, (2) liquid biopsy testing, (3) testing involving formalin-fixed paraffin-embedded material, and (4) testing involving spike-in materials. A major finding from the SEQC2 liquid biopsy proficiency testing study is that all assays could detect mutations with high sensitivity, precision, and reproducibility above the 0.5% VAF threshold. The amount of DNA input material affected test sensitivity, with higher input required for improved sensitivity and reproducibility for variants with a VAF below 0.5%. Advanced NGS-based assays for precision oncology are in high demand, and recently approved ctDNA assays remain to be identified. Establishing a proper validation scheme would support the FDA's regulatory and scientific endeavors.
3. FDA-approved ctDNA assay
We searched for FDA-approved assays in the FDA database (https://www.accessdata.fda.gov/scripts/cdrh/devicesatfda/index.cfm) using the following keywords: circulating tumor DNA, ctDNA, cell-free DNA, circulating cell-free DNA, cfDNA, liquid biopsy, and plasma and DNA. The search results were compared with the annual reports of medical devices cleared or approved on FDA lists published between 2013 and 2022 for confirmation, and assays related to ctDNA were selected. We identified three in vitro diagnostic devices (Epi ProColon, Cobas EGFR Mutation Test, and therascreen PIK3CA RGQ PCR Kit) and two specialized laboratory services (Guardant360 CDx and FoundationOne Liquid CDx). FoundationOne Liquid CDx was approved as a companion diagnostic on October 26 and November 6, 2020. The approved companion diagnostic indications are (1) to identify mutations in the BRCA1 and BRCA2 genes in patients with ovarian cancer eligible for treatment with rucaparib (RUBRACA, Clovis Oncology, Inc.), (2) to identify ALK rearrangements in patients with non–small cell lung cancer eligible for treatment with alectinib (ALECENSA, Genentech USA Inc.), (3) to identify mutations in the PIK3CA gene in patients with breast cancer eligible for treatment with alpelisib (PIQRAY, Novartis Pharmaceutical Corporation), and (4) to identify mutations in the BRCA1, BRCA2, and ATM genes in patients with metastatic castration-resistant prostate cancer eligible for treatment with olaparib (LYNPARZA, AstraZeneca Pharmaceuticals LP). The NGS-based ctDNA tests related to companion diagnostics, such as FoundationOne Liquid CDx and Guardant360 CDx, are transitioning to specialized laboratory services. FDA-approved tests require evaluation of their analytical performance. Recent laboratory-based tests have undergone extensive evaluation using large sample numbers for advanced assay interpretation and reporting, clinical validation, and utility. Considering the cost of the tests, the number of tests performed for evaluation is prohibitive for small laboratories. Specialized laboratories use their own processes to provide users with reports.
Therefore, testing is changing from a complex assay performed at individual laboratories to a more specialized service in which each specialized laboratory devises its own analysis processes, and the testing model shifts to a laboratory service.
4. CE-marked ctDNA assay
In May 2017, the Conformité Européenne (CE) declared the strengthening of the In Vitro Diagnostic Regulation and the Medical Device Regulation. The transition was completed in May 2022, following a 5-year transition period. The CE announced a new database search service called the European Database on Medical Devices (EUDAMED), which will consist of six modules, including actor registration, unique device identification and device registration, notified body and certificate, clinical and performance research, and alert and market monitoring. The EUDAMED database was scheduled for release in July 2022 but has been postponed to the third quarter of 2024 due to a delay in the module development process. We had difficulty performing a systematic search for medical devices or in vitro diagnostics with the CE mark. Therefore, we searched for recently published papers that mentioned CE-marked products.
5. Marketplace of ctDNA test in Republic of Korea
In Korea, the marketplace for ctDNA testing has recently expanded since the In Vitro Diagnostic Medical Devices Act was promulgated in April 2019. When 'ctDNA test' was searched in the medical device database of the Korea Ministry of Food and Drug Safety (https://udiportal.mfds.go.kr/search/data/P02_01#list), a total of seven domestic tests were identified. The Smart Biopsy EML4-ALK Detection Kit (CytoGen, Seoul, Korea) was first nationally accredited on April 20, 2016, and the following tests have since been nationally accredited in sequence: ADPS EGFR Mutation Test Kit V1 (GENECAST, Seoul, Korea), Droplex KRAS Mutation Test v2 (Gencurix, Seoul, Korea), Droplex PIK3CA Mutation Test (Gencurix), PANAMutyper R EGFR V2 (PANAGENE, Daejeon, Korea), AlphaLiquid 100 (IMBDX, Seoul, Korea), and LiquidSCAN (GENINUS, Seoul, Korea). Most are PCR-based assays, such as RT-PCR and dPCR, but AlphaLiquid 100 (IMBDX) and LiquidSCAN (GENINUS) are NGS-based assays.
The introduction of ctDNA testing and technical advancements in NGS have affected both the diagnostic and therapeutic aspects of cancer care. Many biomarkers associated with treatment options have been identified for cancer patients whose tissues were previously unavailable for biopsy. The widespread use of NGS and its increased availability have changed the concept of companion diagnostics from 'one gene, one drug' to 'multiple genes, multiple drugs'. Recently, experts from the National Comprehensive Cancer Network have recommended measuring multiple predictive genes associated with companion diagnostics for certain cancers. Tissue biopsy remains the diagnostic standard because of the value of its pathological information and the need to assess biomarkers without DNA alterations, such as estrogen receptor expression and other protein or RNA biomarkers. However, ctDNA testing is undoubtedly a very promising technology, with broad clinical applications for early diagnosis, monitoring, management, and prognosis. When diagnosing metastatic disease alongside standard tissue biopsies, ctDNA testing can provide key advantages, either as a baseline for follow-up testing after treatment or in situations in which more rapid identification of targetable alterations is needed to guide first-line therapy. In addition, ctDNA testing plays an important role in the real-time monitoring of various aspects of tumors due to its simple sample preparation. We expect that the strengths of ctDNA, including the potential ability to detect latent cancers and track tumor-specific mutations, will naturally enable minimal residual disease (MRD) assessment. The ability to identify microscopic residual disease and occult metastases could revolutionize the individualization of adjuvant and consolidation therapy. Despite the potential use of ctDNA to determine MRD, its use for this purpose is premature due to many unresolved issues. Therefore, the reliability and clinical validity of ctDNA analysis are becoming increasingly important, as they can directly impact patient care with respect to treatment options. To assess the current status of ctDNA testing and ongoing developments, we searched the ClinicalTrials.gov database. A query using ctDNA as the keyword returned 978 clinical trials as of June 2022. The results of the 109 completed trials were reviewed using articles uploaded to the database or by searching PubMed with the national clinical trial number. Twenty-two clinical trials were available, and we reviewed and compared their preanalytical and analytical variables. The preanalytical variables of the blood collection tube used, the volume of whole blood collected, time to sample processing, centrifugation protocols, and DNA extraction methods were missing or unidentifiable in over half of the reports, despite their importance. When provided, the information varied among studies; specimen processing within 24 hours using EDTA tubes was a possible confounding factor regarding the stability of the ctDNA. Some trials requiring detection of low-VAF variants, such as 'using copy number variation of ctDNA for cancer diagnosis' or 'biomarker response according to treatment in metastatic cancer', used only 25 ng of DNA, which appears to be insufficient; therefore, the possibility of false negatives could not be excluded (a rough calculation illustrating this input limitation is sketched below). The use of FDA-approved assays among trials was low at 13.64% (3/22).
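The concern about 25 ng of input DNA can be made concrete with a simple Poisson sampling argument; all values below are illustrative assumptions rather than trial data. With roughly 3.3 pg of DNA per haploid genome, 25 ng corresponds to only a few thousand genome equivalents, so at very low VAFs there is a non-negligible chance that no mutant fragment is present in the reaction at all.

```python
# Rough Poisson sketch of the false-negative floor implied by limited DNA input.
# Input amounts and VAFs are illustrative assumptions.
import math

PG_PER_HAPLOID_GENOME = 3.3  # ~3.3 pg of DNA per haploid human genome

def p_no_mutant_molecule(input_ng: float, vaf: float) -> float:
    """Probability that zero mutant fragments end up in the reaction."""
    genome_equivalents = input_ng * 1000 / PG_PER_HAPLOID_GENOME
    expected_mutant = genome_equivalents * vaf
    return math.exp(-expected_mutant)  # Poisson P(k = 0)

for input_ng in (25, 100):
    for vaf in (0.001, 0.0005, 0.0001):
        p0 = p_no_mutant_molecule(input_ng, vaf)
        print(f"{input_ng:>4} ng input, VAF {vaf:.2%}: P(no mutant molecule) ~ {p0:.3f}")
# Even a perfect assay cannot call a variant whose molecules never entered the
# reaction, so limited input caps sensitivity at low VAF regardless of chemistry.
```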
Information regarding approval from other institutions or agencies was often unavailable, but most clinical trials (72.73%, 16/22) utilized non–FDA-approved testing methods. Despite the considerable therapeutic influence of ctDNA testing and companion diagnostics, the current practice of utilizing various ctDNA tests without a consensus on clinical validation is questionable, as this review of clinical trials and available information demonstrates. Discrepancies between tissue biopsy and ctDNA results are common, and the underlying reasons include temporal heterogeneity (an archival tumor specimen), spatial heterogeneity (a subclonal mutation), and analytical errors . In the case of analytical errors, the source of the error should be evaluated before any therapeutic action is taken. If such an investigation or validation is lacking, this should be disclosed to enable the participants or patients to give proper informed consent. The common rules followed by institutional review boards (IRBs) when reviewing research state that prospective participants (or a legally authorized representative) must be provided with sufficiently detailed information regarding the research. The consent form containing this information must be organized to facilitate an understanding of why one might or might not want to participate . This should also be the case when a patient opts for a ctDNA test. Patients should be able to choose a ctDNA test based on detailed information about the accuracy of the test, the list of genes that can be analyzed, and the laboratory's experience and ability to analyze mutations. Laboratories must be aware of any new developments in ctDNA testing. A changing trend in ctDNA testing is demonstrated by the recently FDA-approved ctDNA assays. Previously, FDA-approved assays were mostly in vitro diagnostic devices (IVDs) that could be conducted by small-scale clinical laboratories. However, recent FDA-approved assays require referral to larger, specialized laboratories with institutional accreditation. Such changes are inevitable given the complexity of the testing and the higher reliability required by clinical practice. Cutting-edge ctDNA testing is costly and requires first-rate laboratory infrastructure and highly specialized, multi-disciplinary professionals. The trend toward centralization and referrals is in line with these requirements. Previously, the introduction of tumor markers resulted in the overutilization of tumor marker testing in the hope of providing definitive answers in cancer diagnostics. The need for specific guidelines on how tumor markers should be utilized demonstrates the concern regarding their misuse, which can result in misdiagnosis or a delay in treatment . In recognition of these issues, in 2002 the National Academy of Clinical Biochemistry produced the Laboratory Medicine Practice Guideline for tumor biomarkers . The guideline provides recommendations based on expert opinions from those in the field of IVDs and the marketplace. This regulatory guideline covers 16 different cancers and their established tumor markers, their qualities, and the technological requirements. A similar guideline is anticipated for ctDNA testing soon. However, guidelines on validation are currently lacking, and ctDNA testing requires clinical validation prior to its clinical implementation.
These clinical validation guidelines will inevitably require updating, refinement, and modification as knowledge and understanding of ctDNA and its biological role increase. In summary, ctDNA testing requires a minimum standard of clinical validation to ensure its clinical utility. The testing requires cooperation among multi-disciplinary experts to provide meaningful and reliable results. Establishing a proper clinical validation guideline for ctDNA will enable access to better cancer treatment and reliable testing in the future.
|
Creation of low cost, simple, and easy-to-use training kit for the dura mater suturing in endoscopic transnasal pituitary/skull base surgery
|
7d2ee495-70cb-4400-8b54-d10b4d42c210
|
10101945
|
Suturing[mh]
|
Just under 3000 pituitary surgeries are performed in Japan each year; among them, endoscopic transnasal transsphenoidal pituitary/skull base surgery (eTSS) is the most common, with the number of cases increasing every year . In most cases, the surgeon makes an incision through the pituitary dura and removes the tumor. If there is no arachnoid damage and no cerebrospinal fluid (CSF) leakage, it is empirically sufficient to apply fibrin glue after packing fat into the excised cavity. However, when intraoperative CSF leakage is observed, that method alone is not sufficient. Typical reconstruction methods include dural closure using various patch grafts, including fascia patch grafts , and covering the defect with a vascularized pedicled nasoseptal flap . The former includes the gasket-seal method, but it cannot be used unless the bone margin around the dural defect remains. The AnastoClip has been reported as a simple device for dura mater suturing in eTSS , but it has several limitations: it is expensive and cannot be clipped properly without a sufficient suture allowance. Therefore, classical suture techniques using a needle and thread are often necessary, but they are difficult because of the deep manipulation required and take practice to master. Training kits for practicing deep suturing under the endoscope in laparoscopy can be constructed by surgeons themselves at low cost , or they can be purchased commercially , . Meanwhile, no similar eTSS practice kits are available on the market. The creation of a low-cost training kit for suturing in eTSS has been reported , but that kit has the drawback of being unrealistic. Pituitary surgery simulators exist for purposes other than dural suturing training, but they are both large and expensive , which is a barrier to skill acquisition. Therefore, this study aimed to create a dedicated dural suturing training kit for eTSS that is as close to the real surgical situation as possible, at the lowest possible cost.
The design policy was to keep all other costs as low as possible, on the premise that most surgeons already own electronic devices such as PCs and monitors. Most of the necessary items were purchased at a 100-yen store ($1 store) or were everyday items. Table lists the items required to make the practice kit and their applications. Images were captured with a stick-type camera (approximately $40, BOEOC, Guangdong, China), which was connected to each individual's electronic device for projection. The camera, which has a magnification range of 10× to 200×, an adjustable focal range from 10 to 500 mm, and a resolution of up to 1280 × 720 pixels, is also equipped with a built-in LED light with adjustable brightness. The surgical instruments required were either the surgeon's own needle holder or inexpensive forceps for aquatic plant care (Gex, Osaka, Japan). Surgipro™ II (Covidien Japan, Tokyo, Japan) was used for suturing.
Concept. For eTSS, the EndoArm endoscope system (Olympus, Tokyo, Japan) is used with a camera with a 0° field of view for manipulation in the nasal cavity and sphenoid sinus at the Nippon Medical School Hospital. A camera with a 30° field of view is used for manipulation inside the sella turcica (Fig. A). Simultaneously, the endoscope is placed at the lower end of the nasal cavity to ensure the surgical instruments do not interfere with each other. The patient's head position is elevated and adjusted so that the insertion angle of the device into the nasal cavity is about 30° from the horizontal plane (Fig. B). The 30° viewing angle projects almost horizontally, so the goal can be achieved by positioning the stick-type camera as shown in Fig. C.

Assembly. To assemble the basic framework, the cutting board stand was prepared, and two doorstops (15° tilt) and two 50 cc syringes were cut (Fig. A). Following this, the items were assembled (Fig. B). The initial concept was achieved when the stick-type camera was installed. The remaining materials were prepared and cut as shown in Fig. A. The bottom of the measuring spoon was hollowed out, and the measuring part was painted white to reproduce the sphenoid sinus. These materials were then connected to the basic framework (Fig. B). The total cost was less than $10. Figure A shows the training procedure. Placing the paper straw in the position where the endoscope is normally located ensures that operability will be closer to that of actual surgery. The corridor for the instruments is narrowed by the syringe and the measuring spoon, which makes it closer to a genuine surgical situation. The image captured by the stick-type camera is shown in Fig. B. The 30° view looks up at the dura mater of the sella turcica from below, compared with the 0° view obtained when the endoscope is inserted through the nasal cavity.

Training. The completed training kit was used to perform the suture practice. As in actual endoscopic surgery, the image displayed on the monitor is flat; hence, experience is required for mastery, but with practice, surgeons become familiar with the device. Two typical practice scenes are shown in Fig. A,B. Figure A shows a situation where the pituitary dura mater, incised with a single horizontal incision, is being sutured. Practicing this, for example, is useful for dural suturing in fenestration surgery for Rathke's cleft cyst. The easy slip-knot technique was used, in which the suture is passed through the dura mater, ligated outside the body, and sent to the deep surgical field . Figure B shows an extended transsphenoidal surgery field with a wide incision in the pituitary dura. The dura is frequently shrunken in these conditions, and suturing the dura mater together is usually impossible; therefore, a free graft, such as the fascia lata, is placed as an underlay and sutured. Here, a nitrile glove cut into a square is used as a substitute for a free graft; hence, the sensation is slightly different from an actual fascia lata graft, but the technique is useful for practicing suturing the graft to the dura mater.
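The camera placement follows from simple geometry. The sketch below (Python) restates that reasoning under one stated assumption, namely that the 30° lens offset is oriented back toward the horizontal plane (looking up at the sella); with that assumption, a 30° endoscope inserted at about 30° above the horizontal yields a roughly horizontal line of sight, which is why a horizontally mounted stick-type camera can reproduce the intraoperative view.

```python
# Hedged sketch of the angle reasoning described above; angles are taken from the text,
# the sign convention (lens offset tilting back toward horizontal) is an assumption.
insertion_angle_deg = 30.0   # scope shaft enters the nose ~30 degrees above the horizontal plane
lens_offset_deg = 30.0       # a 30-degree endoscope views 30 degrees off its shaft axis

effective_view_deg = insertion_angle_deg - lens_offset_deg
print(f"Effective line of sight: {effective_view_deg:.0f} degrees from horizontal")
# -> 0 degrees: the view is essentially horizontal, so a horizontally placed
#    stick-type camera approximates what the 30-degree endoscope sees.
```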
In this study, a low-cost, simple, and easy-to-use training kit for dural suturing in endoscopic transnasal pituitary surgery was created. The total cost was less than $50, including the stick-type camera. Despite minor differences, such as the nasal cavity not stretching as it does in a live patient, the setup was almost identical to real surgery. Parafilm was used as the material for the dura mater; it is easily removable, and the feel of passing the needle through it is very close to that of real dura mater. However, it has the disadvantage of stretching and deforming when tugged strongly with forceps; hence, better materials should be sought. The kit is useful for practicing dural sutures, but since the dural suture of the sella turcica often cannot be made completely watertight, auxiliary materials such as fat grafts and fibrin glue are needed to prevent spinal fluid leakage in real surgery. When the pituitary dura is sutured, the endoscopic camera with a 30° field of view is placed at the lower end of the nasal cavity. Some pituitary surgeons use a 0° endoscope instead of a 30° endoscope. In these cases, trainees can remove the paper straw and insert the stick-type camera through the syringe that imitates the nasal cavity. In other words, this training kit can be used with both 0° and 30° fields of view. Although this training kit is low cost, simple, and easy to use, the angles are carefully calculated just as in actual surgery. The EndoArm is mainly used with a camera with a 0° or 30° field of view in eTSS; moreover, a camera with a 70° field of view is occasionally used for observations toward the anterior skull base. As shown in Fig. , this training kit can also simulate the view of the EndoArm with a 70° camera. Therefore, the ability to observe at the same angles as an actual surgical endoscope suggests that this simple and easy-to-use training kit can be used not only for suturing exercises but also for developing good surgical instruments for suturing. In other words, although there are reports of surgical equipment being developed using expensive models , , such development may be possible without high-class models. Needle holders for dura mater suturing in eTSS are very expensive, but for practice purposes, inexpensive forceps can be substituted if they have a firm grip at the tip. Although low-cost needle holders would be sufficient for training, the feel would be different, so it is preferable to use the same instruments in practice as in real surgery, if possible. Similarly, it is better to use the same sutures in training as in actual surgery. When suturing with the easy slip-knot method every time in training, the suture becomes shorter and shorter, which is a problem. Having a large number of sutures available would be preferable; however, although they are difficult to obtain, training can still be achieved with just one needle. The practice can be divided into a process for needle penetration and a process for ligation using the easy slip-knot method. For the ligation process alone, inexpensive threads, such as kite string, can be used for practice. The training kit in this study was created by neurosurgeons who are already proficient in pituitary surgery. Therefore, a limitation is that we could not evaluate the training effect in inexperienced young surgeons.
Measuring the time required for suturing before and after training might seem preferable; however, it may not be an appropriate metric, because suturing time in real surgery is influenced by anatomical factors, such as the size of the patient's nasal corridor, and accurate, reliable sutures matter more than fast procedures. Nevertheless, practicing with this kit is expected to enable inexperienced young surgeons to become familiar with deep suturing. Despite the inherent subjectivity, we experienced reduced stress while suturing in real surgery, and suturing time was shortened after practicing with this training kit (data not shown). In conclusion, a simple and easy-to-use training kit for dural suturing in eTSS was successfully made at little expense. We hope that this kit will be widely used in the future and that many pituitary surgeons will practice suturing with it, which will help raise the level of pituitary surgery. Additionally, it is expected that this kit can be used for developing surgical instruments without relying on expensive high-class models.
|
Plant chemical variation mediates soil bacterial community composition
|
8b047a40-41ea-4096-adea-9c11307eeccd
|
10102019
|
Microbiology[mh]
|
Understanding what controls the structure and function of terrestrial ecosystems has been greatly enhanced by considering aboveground (plant-based) and belowground (detritus-based) food chains as coupled systems . This conception has given rise to the appreciation that variation in plant functional traits (e.g., nutrient content and anti-herbivore defense expression) can determine variation in the community composition of different trophic compartments (i.e., microbial decomposers, herbivores, carnivores) within ecosystems . Compounding this complexity is the growing realization that intraspecific variation in plant functional traits can explain as much variation in food web structure and ecosystem functioning as interspecific plant trait variation . But understanding of the community- and ecosystem-wide consequences of intraspecific variation in plant trait expression remains rudimentary, especially regarding how soil bacterial communities and their functioning might respond to variation in plant traits . We report here on an experiment aimed at understanding how intraspecific variation in the nature and concentration of plant volatile chemicals that ward off insect herbivory affects soil microbial communities and their decomposition of plant litter containing those volatile chemicals. The study is motivated by previous evidence that interspecific variation in plant chemical defense composition (aka plant chemotype) can influence the trophic structure of food webs . Our previous work, in particular, demonstrated that plant chemotype can determine both arthropod and soil microbial communities . This study complements that work by resolving how plant chemotype can alter soil microbial community composition. We test whether soil microbial community composition is shaped most by the original plant chemotype with which the microbes are naturally associated or by differences in litter inputs from alternative chemotypes, using chemotypes of the perennial herb Tansy ( Tanacetum vulgare ) as our study system. Our research combined next-generation DNA sequencing (16S rRNA gene amplicon sequencing) to assess soil microbial community composition with a litter decomposition experiment to address the following questions: (1) Does a soil microbial community associated with a particular plant chemotype have a different ability to decompose litter from its own chemotype vs litter from another chemotype? (2) Does soil microbial diversity change when subjected to its own chemotype's litter vs. another chemotype's litter?
Study system. Tansy ( T. vulgare ) is a perennial plant originating in Europe and Asia . Large populations can be found in disturbed, well-drained, nutrient-poor soils , where it often forms isolated patches. It also frequently occurs along river valleys and railway tracks and on abandoned lands. Tansy genotypes can be classified according to their volatile chemical content (chemotypes); the most frequent are β-thujon, camphor, and borneol . Breeding experiments with these chemotypes using molecular markers have confirmed that the volatile chemical content of a particular Tansy plant is determined genetically , . Tansy chemotypes determine their associated arthropod communities, which include three specialised aphid species ( Macrosiphoniella tanacetaria (Kaltenbach), Metopeurum fuscoviride Stroyan and Uroleucon tanaceti L.) and many predators specialised on Tansy aphids, the most important being the 7-spotted ladybird beetle ( Coccinella septempunctata ), the generalist nursery web spider ( Pisaura mirabilis ) and the minute pirate bug ( Orius spp.) . Together, these properties make Tansy an ideal model system for studying effects of intraspecific plant variation on ecosystem functions. The experiment reported here used individuals drawn from Tansy populations that belong to different genetic types with different chemical defense profiles (chemotypes) . These chemotypes were identified in previous work which surveyed and evaluated the chemical composition and genotypes of 100 tansy plants from populations along a 120 km transect in Transylvania, Central Europe . That previous survey revealed that chemotypes were comprised of different compositions of four key volatile chemicals: (1) Camphor, (2) Borneol, (3) Carvone, and (4) β-Thujon (see for details). We used soil and litter associated with hybrid chemotypes that were comprised of a mixture of 40% or more of a dominant volatile chemical and 20% or less of the other volatiles. For example, a hybrid with 40% or more Camphor comprised the Camphor treatment, a hybrid with 40% β-Thujon comprised the Thujon treatment, etc. (Fig. ). When possible, we used litter and soils from multiple individual plants of each chemotype taken from points along the 120 km transect. We obtained soils and litter associated with Camphor, Borneol and Thujon hybrids (n = 3 plants for each hybrid chemotype) and a Carvone hybrid (n = 1 plant). We collected soils associated with the individual plants by extracting soil from a 50 cm diameter area around each plant to a 15 cm depth. This soil horizon contained 3.26% humus, a mobile potassium content of 408 ppm and nitrogen which varied between 0.143% and 0.101%. The base saturation of the upper layer was 77.85%, and the pH (H2O) was 6.38 . We collected the aboveground biomass of each individual plant by clipping it at the soil surface.

Litter decomposition experiment. The litter decomposition experiment evaluated how soil and litter from each chemotype shaped the soil microbial community. We further evaluated whether transplanting litter from one chemotype to soils associated with another chemotype influenced the microbial community. We deployed a factorial design, crossing soil and litter sourced from each of the four hybrid chemotypes plus the control (Fig. ). We created treatment soils (Fig. ) by bulking and homogenizing soil from the replicate hybrid plants for each chemotype treatment. We also collected and homogenized leaf material from each of the treatment chemotypes for the decomposition assay.
We further created a control by collecting and homogenizing soil and plant material from field locations covered in monocots without tansy plants. Thirty kilograms of soil from each hybrid chemotype or control were filled into five individual 40 × 40 × 30 cm boxes per chemotype (Fig. ). We put a homogenized mixture of 33 g of litter and 66 g of soil from each chemotype or control into individual standard 0.2 mm mesh litterbags . The soil added to each litter bag was from the same chemotype as the litter. We created 5 replicate litterbags for each chemotype or control for each soil treatment (n = 125 litter bags, with n = 25 litter bags for each of the 5 soil treatments or control). At the end of November 2020, we buried the five replicate litter bags for each litter-soil treatment combination 10 cm below the soil surface within each box (Fig. ). All boxes were kept outdoors under natural conditions until the end of May 2021. Litterbags were then collected from each chemotype box, and samples were placed into sterile tubes and stored at − 70 °C until subjected to DNA analyses. Total genomic DNA was extracted with the DNeasy PowerSoil Pro Kit (Qiagen) from the mixture of litter and soil remaining in each buried litter bag in May 2021. Then, the V3-V4 region of the 16S rRNA gene was amplified with Bacteria-specific PCR using the following primers: B341F (5′-CCT ACG GGN GGC WGC AG-3′) and 805NR (5′-GAC TAC NVG GGT ATC TAA TCC-3′). DNA sequencing was conducted by the Genomics Core Facility RTSF of Michigan State University (USA) on a standard MiSeq v2 flow cell (Illumina) in a 2 × 250 bp paired-end format using a v2, 500 cycle MiSeq reagent cartridge. Sequence analysis was performed with mothur v1.44.3 , while read alignment and taxonomic assignment were carried out using the ARB-SILVA SSU Ref NR 138 reference database, applying operational taxonomic units (OTUs) at a traditional 97% cutoff. A total of 852,130 high-quality reads were obtained in this project, an average of 34,085 reads/sample.

Data analyses. Microbial community data were rarefied to 19,000 reads per sample before we created an average distance matrix for analysis using 100 random draws from each of our sequenced communities (n = 25). First, the 13 dominant bacterial phyla and genera were compared between plant chemotypes and the control; here, only proportional differences of bacterial distributions were presented between samples using the microbial sequence data. Then, we produced diversity profiles of the entire set of bacterial genera (i.e., all OTUs) to examine differences in community diversity in different soil and litter combinations. Next, we used non-metric multidimensional scaling (NMDS) to compare the composition of bacterial phyla and genera among tansy chemotypes. Groupings were based on relative proportions of different chemical volatiles in each Tansy plant. Finally, we tested for a significant effect of soil type and litter type on the bacterial community using multivariate analysis of variance (vegan::adonis2). Analyses were run in R Studio v0.97.314 using R v3.0.1 (R Core Team 2013).

Permit statement. Experimental research and field studies on tansy, including the collection of plant material, complied with institutional, national, and international guidelines and legislation. Permissions were not required for Tanacetum vulgare collections because tansy is a wild weed with moderate expansion in Transylvania and is included among the plants that have to be controlled with plant protection methods.
Voucher specimens were not deposited because only leaves and stems, not entire specimens, were collected for analyses.
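The original workflow was run in mothur and R (vegan::adonis2). As a rough, hedged illustration of the same analysis steps, the Python sketch below repeats rarefaction over random draws, averages Bray-Curtis distances, ordinates them with non-metric MDS, and tests a grouping factor with PERMANOVA. The OTU table, sample labels, rarefaction depth, and the choice of scipy/scikit-learn/scikit-bio are all assumptions for illustration; this is not the analysis actually performed in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from skbio.stats.distance import DistanceMatrix, permanova

rng = np.random.default_rng(42)

def rarefy(counts, depth):
    """Subsample one sample's OTU counts to a fixed depth without replacement."""
    pool = np.repeat(np.arange(counts.size), counts)   # one entry per read, labelled by OTU index
    keep = rng.choice(pool, size=depth, replace=False)
    return np.bincount(keep, minlength=counts.size)

# Placeholder OTU table (25 samples x 300 OTUs) and soil-source labels; the real study
# rarefied to 19,000 reads per sample and averaged distances over 100 random draws.
otu_table = rng.integers(0, 500, size=(25, 300))
groups = np.repeat(["Camphor", "Borneol", "Carvone", "Thujone", "Control"], 5)

depth = int(otu_table.sum(axis=1).min())
n_draws = 100
dist_sum = np.zeros((otu_table.shape[0], otu_table.shape[0]))
for _ in range(n_draws):
    rarefied = np.array([rarefy(row, depth) for row in otu_table])
    dist_sum += squareform(pdist(rarefied, metric="braycurtis"))
mean_dist = dist_sum / n_draws

# Non-metric multidimensional scaling on the averaged Bray-Curtis matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
scores = nmds.fit_transform(mean_dist)

# PERMANOVA for the soil-source effect (analogous to a single-factor vegan::adonis2).
dm = DistanceMatrix(mean_dist, ids=[f"S{i}" for i in range(otu_table.shape[0])])
print(permanova(dm, grouping=list(groups), permutations=999))
```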
Bacterial communities in the litter bags from different chemotypes had a different composition of phyla (Fig. A) and genera (Fig. B) across our treatments, with changes in the abundance of phyla less pronounced than those in genera. Diversity profiles of bacterial genera revealed that soil and litter combinations had similar species richness (Fig. ). Litter from Camphor plants decomposed in soil taken from underneath Carvone, Thujone, and Borneol plants, but not the controls, had fewer rare species, as indicated by lower diversity values as the scale parameter increased (Fig. ). Borneol soil had the largest effect on community evenness (i.e., high scale parameter), with the most diversity retained by litter from Borneol plants when it was decomposed in the soil from beneath Borneol plants. In fact, Borneol litter decomposed in soil from beneath Borneol plants was the only combination where diversity was unambiguously different (higher, in this case) from the other treatments (Fig. ). Both the source litter and the soil in which it was buried had a significant influence on the composition of the bacterial community. Overall, the soil in which the litter was decomposed had a stronger effect on the composition of the bacterial community than did the type of litter being decomposed. This was true for both bacterial phyla and genera (Fig. A,B).
Our common garden experiment revealed significant variation in the bacterial communities decomposing litter from different Tansy chemotypes. The variation was driven by both soil and litter sources, indicating that community assembly was significantly affected by both processes. Yet the soil source played a dominant role, explaining twice as much variation in community composition as litter type did. Relative changes in the microbial community across chemotypes could indicate differences in function, but our data do not support this interpretation. For example, we have already demonstrated that plant and soil nitrogen increase from Thujone to Borneol to Camphor plots in the field . Here, we found microorganisms, such as Pseudomonas , Massilia , and Sphingomonas , that have been described as important genera for litter degradation and mineralization. Their role in early litter decomposition has been demonstrated . Yet their relative abundances, individually or in total, ranked inconsistently across soil and litter combinations, suggesting a limited link between relative abundance and functional outcomes in the field. In summary, the microbial community decomposing litter varied with both the soil source and the litter type across different chemotypes of the same plant species. This result suggests an important role of plant chemical defense in shaping soil microbial community composition. However, patterns of diversity and potential links to microbial function were inconsistent. This inconsistency occurred because the ranking of different microbial taxa across chemotypes did not correspond to our understanding of function, and neither soil nor litter source blocked together when we sorted treatments by the relative abundance of individual or functionally similar taxa. Understanding the significance of the changes in microbial community composition will therefore likely require an analysis of functional outcomes (i.e., nutrient cycling, enzyme activity).
|
Application and limitation of a biological clock-based method for estimating time of death in forensic practices
|
cc6f5f71-2500-4afe-92f4-5a8c9567b69e
|
10102023
|
Forensic Medicine[mh]
|
Estimating the time of death, which is often extremely difficult, is one of the most important tasks in forensic practice. To date, numerous methods for estimating the time of death have been developed , . Over the last decade, various innovative techniques, such as tissue nanomechanics , mass spectrometry-based quantitative proteomics , analysis of the oral microbiota community and micro-RNA analysis , have been introduced to estimate the postmortem interval, bringing substantial progress to this field. However, most of these methods estimate the time since death, not the time of death itself. Current methods for estimating the time of death remain unsatisfactory. Advances in chronobiology have brought about great impact and progress in various medical fields, such as chronopharmacology, chronotherapy and sleep disorder therapy . Chronobiology can also contribute to forensic medicine, especially in the estimation of the time of death. However, the forensic application of chronobiology is quite limited. To our knowledge, there is currently only one report of the application of chronobiology to forensic investigation, in which the time of death was estimated based on the melatonin concentration in the pineal body, serum and urine . Therefore, we tried to apply the biological clock to the estimation of the time of death. In 2011, we reported the first forensic application of chronobiology in the estimation of the time of death using a mouse model and applied the method to a few autopsy cases . In our previous report, we used two main oscillator genes, brain and muscle aryl hydrocarbon receptor nuclear translocator-like 1 ( BMAL1 or ARNTL ) and nuclear receptor subfamily 1 group D member 1 (Rev-Erbα, NR1D1 ), in the circadian clock system to read the biological clock in the kidneys, livers and hearts. Since these two clock genes oscillate in opposite phases , the NR1D1 / BMAL1 ratio amplifies the circadian oscillation of each gene's expression . We demonstrated the applicability of our method in forensic practice, but we could not clarify the reliability and limitations of the method, because only a limited number of autopsy cases were examined. Since its development, we have applied the method in our routine practice of estimating the time of death in autopsy cases. In this study, we evaluated our method based on the results of its application to 318 autopsy cases with known times of death in our department. We show the practical applicability and limitations of our method, which estimates the time of death based on the biological clock.
The pattern of clock gene expression in the hearts of autopsy cases. The NR1D1/BMAL1 ( N/B ) and BMAL1/NR1D1 ( B/N ) ratios were plotted against the time of death, resulting in clear peaks around 6:00 and 18:00, respectively (Fig. a and b), indicating that clock gene expression can be precisely detected even in dead bodies. Figure c and d show the mean values of the N/B and B/N ratios in the four time domains (morning, 3:00–8:59; noon, 9:00–14:59; evening, 15:00–20:59; and night, 21:00–2:59). The N/B and B/N ratios were significantly higher in the morning and evening than in the other time domains, respectively, which confirms that these ratios are suitable parameters for estimating the time of death. However, in some autopsy cases, the N/B and B/N ratios exhibited very low values in the morning and evening, respectively (Fig. a and b), suggesting that some factors affected these parameters.

Evaluation of the factors affecting the biological clock in the deceased. We next examined the factors affecting the ratios in the deceased. First, we examined gender differences in the temporal pattern of the ratios (male, n = 224; female, n = 94). Both genders showed a similar temporal pattern of the N/B (Fig. a) and B/N (Fig. b) ratios. The N/B (Fig. c) and B/N (Fig. d) ratios in deceased males were significantly higher in the morning and evening, respectively, which was similar to the results for all cases (Fig. c and d). On the other hand, the N/B ratio in deceased females (Fig. c) was significantly higher in the morning, similar to the results in deceased males, whereas the B/N ratio was higher in the evening than in other time domains, but the difference was not statistically significant (Fig. d). We divided the cases into three age groups (≤ 19 years, n = 13; 20–69 years, n = 200; ≥ 70 years, n = 105). All age groups showed similar temporal patterns (Fig. a–d). The N/B ratio in the morning was significantly higher than those in the three other time domains in the 20–69 and ≥ 70 years groups (Fig. c). The B/N ratio in the evening was higher than those in the three other time domains only in the 20–69 years group (Fig. d). In contrast, the temporal pattern of the N/B ratio in the morning (3:00–8:59) and that of the B/N ratio in the evening (15:00–20:59) did not significantly differ from those in other time domains in the ≤ 19 years group (Fig. c and d). The N/B ratio in the morning and the B/N ratio in the evening were plotted against age; the results showed that the N/B and B/N ratios are independent of age (Fig. e and f). However, the number of cases in the youngest and oldest groups was small; therefore, more cases are needed for statistical analysis of these groups. Finally, we examined the effect of the postmortem interval on the ratios. We divided the cases into two groups, < 30 h postmortem interval (n = 250) and > 30 h postmortem interval (n = 68). The N/B and B/N ratios in both groups showed peaks in the morning and evening, respectively, indicating that the postmortem interval had virtually no effect on them (Fig. a–f). However, there was no significant difference in the B/N ratio between the evening and noon time domains in the > 30 h postmortem interval group (Fig. d). This is likely due to the small number of cases (n = 9) in the noon time domain of the > 30 h postmortem interval group. The N/B ratio in the morning and the B/N ratio in the evening were plotted against the postmortem interval; the results indicated that the ratios are independent of the postmortem interval (Fig. e and f).

Evaluation of the cause of death affecting the biological clock. We next examined the differences in the temporal pattern of the N/B and B/N ratios between intrinsic (n = 73) and extrinsic (n = 245) death groups. As shown in Fig. a and b, there were no significant differences between the groups. In the extrinsic death cases, the N/B ratio in the morning and the B/N ratio in the evening were significantly higher than those in other time domains (Fig. c and d). However, in the intrinsic death cases, the N/B ratio in the morning was significantly higher than those in other time domains, but the B/N ratio in the evening did not significantly differ from those in other time domains (Fig. c and d). We also examined the effect of specific causes of death on the ratios. The most common causes of death (Table ), including hemorrhagic and traumatic shock, aortic rupture, drowning, burn, asphyxia, intoxication, and ischemic heart failure, but excepting brain injury, did not seem to have a significant effect on the ratios (data not shown). Of note, brain injury, especially chronic brain injury with cerebral edema, cerebral hernia, and cerebral hypoxia, seemed to strongly affect the ratios in the hearts of the deceased. As shown in Fig. a and b, the morning peak of the N/B ratio and the evening peak of the B/N ratio did not occur in cases of delayed death due to chronic brain injury (n = 15), whereas the peaks of the N/B and B/N ratios were observed in acute death cases with severe brain injury (n = 35). The cases of delayed death due to chronic brain injury did not show an oscillation in the N/B and B/N ratios (Fig. c and d). The N/B ratio in the morning significantly differed from that in the evening in cases of acute death with severe brain injury (Fig. c). However, these findings are from a small number of cases, and the loss of oscillation of the N/B and B/N ratios due to chronic brain injury needs to be confirmed in more cases.

Applicability of our method to forensic practice. Our method reads the biological clock in the deceased; however, there are only two readable time domains (morning, around 6:00; evening, around 18:00) in the clock. The N/B ratio is suitable for reading at 6:00 and the B/N ratio is suitable for reading at 18:00. All cases where the N/B ratio was > 25 were deaths occurring from 1:00 to 10:00 (n = 40), and those where the ratio was > 40 were deaths occurring from 3:00 to 9:00 (n = 23) (Fig. a). On the other hand, all cases where the B/N ratio was > 1.5 were deaths occurring from 14:00 to 22:00 (n = 39), and those where the ratio was > 4 were deaths occurring from 15:00 to 20:00 (n = 11) (Fig. b). However, only 24.8% of cases (79/318) could be identified as morning or evening deaths by our method, and low N/B and B/N values do not exclude morning or evening deaths. Therefore, although this method is not effective in all cases, it is still important in forensic practice because it complements conventional methods from a completely different perspective.
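As a compact restatement of the cutoffs reported in this case series, the thresholds above can be written as a simple decision rule. The sketch below is ours (the function name and interface are assumptions, and this is not a validated classifier); it also encodes the fact that a low ratio is uninformative rather than exclusionary, and the caveats about severe brain injury discussed later still apply.

```python
def death_time_window(nb_ratio: float, bn_ratio: float) -> str:
    """Map heart NR1D1/BMAL1 (N/B) and BMAL1/NR1D1 (B/N) ratios to the
    death-time windows observed in this 318-case series."""
    if nb_ratio > 40:
        return "death between 03:00 and 09:00 (N/B > 40)"
    if nb_ratio > 25:
        return "death between 01:00 and 10:00 (N/B > 25)"
    if bn_ratio > 4:
        return "death between 15:00 and 20:00 (B/N > 4)"
    if bn_ratio > 1.5:
        return "death between 14:00 and 22:00 (B/N > 1.5)"
    # Low ratios are uninformative: a morning or evening death cannot be excluded.
    return "inconclusive"

print(death_time_window(nb_ratio=43.0, bn_ratio=0.02))  # high N/B -> early-morning window
print(death_time_window(nb_ratio=3.1, bn_ratio=2.2))    # elevated B/N -> afternoon/evening window
```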
To date, most methods for estimating the time of death estimate the time since death and are affected by internal, external, antemortem, and postmortem conditions. We hypothesized that the biological clock stops at the time of death and developed a method to read this stopped biological clock . Therefore, our method estimates the time of death, not the time since death, and appears to be independent of environmental factors; however, it can be influenced by internal factors such as the age, gender, cause of death, and lifestyle of the deceased. The reliability and limitations of the practical application of newly developed methods must be evaluated. Thus, we examined our method in an increased number of cases with a defined time of death. The N/B ratio showed a peak around 6:00, indicating that the method gives a stable result. Furthermore, we examined a novel reverse parameter, the B/N ratio, which showed a peak around 18:00. The N/B ratio was high in the morning, while the B/N ratio was high in the evening; therefore, we can determine whether death occurred in the morning or evening with this method. However, low N/B and B/N values were often found in cases of death at around 6:00 and 18:00, respectively. Such irregular values were not seen in the animal experiments because the mice had a uniform genetic background and were bred in a strictly controlled environment . Furthermore, all mice were sacrificed quickly by cervical dislocation under deep anesthesia. On the other hand, humans have different genetic backgrounds and live in various time patterns (e.g., shift workers), which might affect the expression pattern of biological clock genes . In the present study, we demonstrated that gender, age, and postmortem interval (within 96 h after death) did not significantly affect the N/B and B/N ratios. However, the youngest (< 1 year old, n = 5) and oldest (> 90 years old, n = 14) cases, as well as those with long postmortem intervals (> 48 h, n = 11), were examined in only limited numbers. It is known that circadian rhythms such as body temperature and nocturnal sleep onset appear within 60 days after birth . Moreover, the circadian oscillation of clock gene expression in the SCN (suprachiasmatic nucleus) and some peripheral tissues has been confirmed in nonhuman primate fetuses , suggesting that clock gene expression in the heart of human infants may also show circadian oscillation. Therefore, the biological clock-based estimation of the time of death seems to be applicable to infant cases. However, maternal melatonin affects clock gene expression in nonhuman primate fetuses , indicating that the breastfeeding pattern might affect the circadian clock in infants. Therefore, differences in clock gene expression patterns between the infant and adult heart may be found in future research. On the other hand, it has been reported that aging significantly affects the circadian pattern of gene expression in the human prefrontal cortex, which might bring about changes in the circadian rhythm in old age . Different circadian rhythms in older individuals, especially in the feeding pattern, can affect biological clock gene expression . Since the biological clock in peripheral tissues is also under adrenergic control , age-related changes in the beta-adrenergic neuroeffector system might alter the clock gene expression pattern in the heart of older adults . Based on the above-mentioned facts, our method should be applied carefully to infants and older adults.
Longer postmortem intervals might cause RNA deterioration, which increases the uncertainty of the results. Since the number of cases among children, the elderly, and those with a long postmortem interval was small, a study using a larger number of cases is necessary for a statistically meaningful discussion. The cause of death appeared to influence the N/B and B/N ratios. However, there were no significant differences in the temporal patterns between intrinsic and extrinsic death cases. Moreover, most causes of death did not significantly affect the ratios. Exceptionally, the peaks of both ratios almost disappeared in cases of death with cerebral edema, cerebral hernia, or cerebral hypoxia. We also found an alteration of the N/B ratio in the iliopsoas muscle tissue of cases with chronic brain injury (not shown), suggesting that chronic brain injury-induced SCN damage brings about a systemic alteration of peripheral clock gene expression. Disturbances in circadian rhythms due to brain trauma have been reported. Recently, traumatic brain injury-induced alteration of clock gene expression in the SCN and hippocampus was reported in a rat model. Our preliminary results in a mouse model of water intoxication showed that cerebral edema induced alteration of the biological clock in the heart. Therefore, biological clock-based estimation of the time of death should be applied with caution to cases of severe brain injury or intrinsic death with diseases affecting brain function, such as severe hepatic encephalopathy. We analyzed 318 cases in the present study. However, there was bias in the number of cases with regard to gender, age, cause of death, and other factors. The number of cases in some groups, such as females, was less than 100, and some of these groups did not show statistically significant differences in the N/B and B/N ratios between the morning and evening time domains and the other time domains. Therefore, our method should be further validated in studies using a larger number of cases. Multicenter research may be necessary to conduct an analysis with a sufficient number of cases. Recently, an analysis of human transcriptional rhythms using a cyclic ordering algorithm called Cyclops was reported. The Cyclops algorithm enables estimation of the circadian phase of a sample from high-throughput data that lack temporal information and is expected to be an innovative approach to estimating the time of death in forensic practice. As Cyclops is an algorithm for the temporal reconstruction of population-based human organ data, its usefulness as a method for estimating the time of death from individual autopsy samples in forensic practice is uncertain. The usefulness and problems of Cyclops will be clarified by verifying it in forensic practice. Another problem is that high-throughput analysis is currently expensive for forensic use. In the present study, our method was able to identify only 79 cases of morning or evening deaths out of a total of 318 cases (about 25%). This indicates that our method works only in a limited proportion of cases. However, all classical methods for estimating the time of death have uncertainties and are based on postmortem changes that begin at death and are influenced by various environmental factors. In contrast, our method directly estimates the time of death based on the circadian clock, which stops at death and is unaffected by the factors that influence postmortem changes.
For example, once a deceased person's body temperature has reached ambient temperature, it is difficult to estimate the time since death based on body temperature. In cases of death by burning, many classical estimation methods, such as body temperature, corneal opacity, and rigor mortis, cannot be used. Therefore, all classical estimation methods have limitations in their applicability. Our method complements conventional methods from a completely different perspective and can be used where conventional methods are not applicable. In conclusion, our method makes it possible to identify morning and evening deaths by reading the N/B and B/N ratios in the heart of the deceased, regardless of gender, age, postmortem interval, and most causes of death. Although low N/B and B/N ratios cannot exclude the possibility of death occurring in the morning or evening, our method is still valuable in forensic practice because it complements the classical methods that depend on postmortem changes. However, since severe brain injury profoundly affects the peripheral circadian clock, our method may not apply to cases of severe brain injury. Additionally, the applicability of the method to infants and older adults needs to be evaluated in more cases.
Autopsy samples
Heart samples were obtained from 318 forensic autopsy cases with known times of death (224 men and 94 women). The age of the autopsied subjects ranged from 2 months to 97 years (average: 58.7 years), and the postmortem interval in all cases was less than 96 h (average: 22.3 h). The causes of death of the subjects are shown in Table . Tissue samples were taken during autopsy, immediately frozen in liquid nitrogen, and stored at −80 °C until use. Clock gene expression is routinely analyzed in all autopsy cases at our Institute as part of the process for estimating the time of death.
Extraction of total RNA and real-time RT-PCR
Total RNA was extracted from tissue samples (about 100 mg) using the Maxwell System with the Maxwell RSC simplyRNA Tissue Kit (Promega Corporation, Madison, WI) according to the manufacturer's instructions. Then 1 μg of total RNA was reverse-transcribed into cDNA using a PrimeScript RT reagent Kit (TAKARA BIO INC., Otsu, Japan) with random 6-mer primers (TAKARA BIO INC.). The generated cDNA was subjected to qPCR analysis using a SYBR® Premix Ex Taq™ II kit (TAKARA BIO INC.) with specific primer sets (Table ). Amplification and detection of mRNA were performed using the Thermal Cycler Dice® Real Time System (TP800, TAKARA BIO INC.).
Statistical analysis
Data were expressed as the mean ± standard error of the mean. The unpaired Student t-test and Scheffé's F test were used to compare values between two groups and for multiple comparisons, respectively. Statistical significance was set at p < 0.05.
Ethical approval
Our study was approved by the Research Ethics Committee of Wakayama Medical University (No. 3177). All procedures were carried out in accordance with the principles of the Declaration of Helsinki. In addition, this study was conducted using past autopsy records and heart tissues; we were unable to obtain informed consent from the bereaved families for the use of the records and the heart tissues. In accordance with the "Ethical Guidelines for Medical Research Involving Human Subjects" (enacted by the Ministry of Health, Labor and Welfare in Japan), Sect. 12–1 (2) (a) (c), the review board of the Research Ethics Committee of Wakayama Medical University waived the need for written informed consent from relatives of the individuals studied because this was a de-identified retrospective study of archived autopsy-derived tissues.
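To make the statistical workflow described under "Statistical analysis" concrete, the following sketch shows one way to run an unpaired Student's t-test between two time-of-death domains and a pairwise Scheffé comparison across several domains; the N/B ratio values and group labels are invented placeholders, and this is an illustration, not the authors' analysis code.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical N/B ratios grouped by time-of-death domain (placeholder values only)
groups = {
    "morning": np.array([28.1, 41.5, 33.2, 26.8, 39.0]),
    "daytime": np.array([4.2, 6.1, 3.8, 5.5, 7.0]),
    "evening": np.array([0.9, 1.4, 0.7, 1.1, 0.8]),
}

# Unpaired Student's t-test between two domains
t, p = stats.ttest_ind(groups["morning"], groups["daytime"])
print(f"morning vs daytime: t = {t:.2f}, p = {p:.4f}")

# Pairwise Scheffé comparison: a pair differs if
# (mean_i - mean_j)^2 / (MSE * (1/n_i + 1/n_j)) > (k - 1) * F_crit
data = list(groups.values())
k = len(data)
n_total = sum(len(g) for g in data)
mse = sum(((g - g.mean()) ** 2).sum() for g in data) / (n_total - k)
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    f_s = (a.mean() - b.mean()) ** 2 / (mse * (1 / len(a) + 1 / len(b)))
    print(f"{name_a} vs {name_b}: Scheffé significant = {f_s > (k - 1) * f_crit}")
```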
Supplementary Information.
|
“M1/M2” Muscularis Macrophages Are Associated with Reduction of Interstitial Cells of Cajal and Glial Cells in Achalasia
|
744ee1ca-a0c1-4f23-93b1-de0058ca96f7
|
10102055
|
Anatomy[mh]
|
Achalasia is an esophageal motility disorder characterized by an absence of peristalsis and impaired relaxation of the lower esophageal sphincter (LES), leading to symptoms including dysphagia, chest pain, reflux, and weight loss. This motility disorder is associated with functional loss or reduction of myenteric plexus neurons, interstitial cells of Cajal (ICC), and neuronal nitric oxide synthase (nNOS), mainly in the distal esophagus and LES. Except for secondary achalasia due to Chagas disease, the etiology of idiopathic achalasia remains largely unknown. Many studies have indicated that aberrant autoimmunity, probably triggered by a viral infection, may play an important part in the neuronal degeneration of achalasia, particularly in genetically susceptible individuals. Patients with achalasia are more likely to suffer from autoimmune diseases, and specific autoantibodies associated with neuronal damage can be found in their sera. Moreover, infiltration of the LES muscle by inflammatory cells, including eosinophils, mast cells, and T cells, has been associated with neuronal loss and degeneration. Macrophages are essential for innate immunity and play a critical role in inflammation and host defense. Macrophages exhibit remarkable plasticity and can differentiate into classically activated macrophages (M1) or alternatively activated macrophages (M2), the type of activation depending on the microenvironment. The M1 phenotype is pro-inflammatory and expresses high levels of proinflammatory mediators including interleukin-6 (IL-6), tumor necrosis factor-alpha (TNF-α), and inducible nitric oxide synthase (iNOS). In contrast, the M2 phenotype is anti-inflammatory, with high expression of the galactose-type lectin and the mannose receptor. Macrophages in the gastrointestinal (GI) tract can be divided into mucosal macrophages and muscularis macrophages (MMφ) according to their resident location. Several studies have shown that MMφ are associated with GI motility disorders such as postoperative ileus, intestinal ischemia–reperfusion (I/R) damage, and gastroparesis. The activation status of MMφ can influence the function of smooth muscle, ICC (the GI pacemaker cells), and neurons. However, no study has investigated the relationship between MMφ and esophageal motility diseases such as achalasia. Therefore, the purpose of this study was to preliminarily explore the association between MMφ and achalasia and to investigate the correlation of the M1 and M2 phenotypes with ICC, nitrergic nerves, and glial cells, as well as with clinical characteristics.
Patient Selection
The study included 27 patients diagnosed with achalasia who underwent peroral endoscopic myotomy (POEM) at our center from July 2020 to May 2021. Patients were excluded based on the following criteria: (1) presence of esophageal disease that might interfere with the results of our study, such as Barrett's esophagus, esophageal stricture, esophageal varices, or active esophagitis; (2) serious underlying diseases such as liver cirrhosis, hematological disease, or coagulopathy; (3) pregnancy or breastfeeding; (4) surgical contraindications for POEM. All 27 patients received a comprehensive preoperative clinical evaluation including the Eckardt Score for symptoms of achalasia, high-resolution esophageal manometry (HREM), barium esophagogram, and esophagogastroduodenoscopy (EGD). The Chicago Classification of esophageal motility disorders v3.0 was used for the diagnosis and classification of achalasia. Clinical information was obtained by face-to-face inquiry and included age, gender, body mass index (BMI), history of tobacco and alcohol intake, history of prior treatments, and disease duration. The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Nanjing Medical University (2020-SR-380). Written informed consent was obtained from all participants.
Tissue Samples
As reported in previous studies, tissue samples of muscle were obtained from the LES high-pressure zone after myotomy during the POEM procedure. Two to three pieces of tissue were collected from each patient, each generally 0.3 cm × 0.3 cm × 0.3 cm in size. Esophageal biopsies were taken at the same site from 10 patients who underwent surgery for esophageal or stomach neoplasms without invasion of the cardia at our center. The exclusion criteria were consistent with the above, and none of the 10 patients had an esophageal motility disorder, including achalasia, or an autoimmune disease that might interfere with the results of our study. This method of obtaining control specimens is in accordance with previous studies. All esophageal biopsies were conducted by the same endoscopist and surgeon to ensure that the sampling sites were consistent and to avoid sampling error. The biopsy specimens were immediately immersed in 10% formalin and then embedded in paraffin.
Immunohistochemistry and Quantification
Consistent with previous studies of gastrointestinal motility, antibodies to nNOS were used to examine nitrergic nerves and antibodies to S-100β to assess glial cells. Immunohistochemical staining for c-kit was performed to assess the ICC networks. CD68 was used as a general marker for macrophages; for identification of macrophage phenotype, antibodies to inducible nitric oxide synthase (iNOS) were used to identify M1 macrophages and antibodies to CD206 to identify M2 macrophages. The tissue samples were embedded in paraffin, sliced into 4-μm sections, and mounted on slides. Following deparaffinization, rehydration, antigen retrieval, and blocking of endogenous peroxidase, immunohistochemical staining was conducted using anti-CD68 antibody (ab955, Abcam, 1:3000 dilution), anti-iNOS antibody (abs131793, absin, Shanghai, China, 1:200 dilution), anti-CD206 antibody (ab64693, Abcam, 1:10,000 dilution), anti-c-kit antibody (ab32363, Abcam, 1:400 dilution), anti-nNOS antibody (ab5586, Abcam, 1:100 dilution), and anti-S-100β antibody (ab52642, Abcam, 1:400 dilution).
The immunohistochemical staining intensities of the proteins of interest were quantified as integrated optical densities (IODs) using Image-Pro Plus 6.0 software (Media Cybernetics, MD, USA). The researcher who performed the quantification was blinded to the study group of each sample to avoid bias.
Follow-Up and Outcome Measures
Patients were regularly followed up at 1, 3, 6, and 12 months by clinical interview. The Eckardt Score was used for clinical assessment. HRM and barium esophagram were performed 3 months after POEM, and EGD 12 months after POEM. An Eckardt Score > 3 after the operation was defined as clinical failure. The last follow-up was in February 2022.
Statistical Analysis
Continuous variables are presented as mean ± standard deviation (SD) or range, and categorical variables as number of cases (ratio %). Differences between groups were compared using a two-sample independent t test for continuous variables and the chi-square test for categorical variables. Correlations between variables were evaluated using Spearman's or Pearson correlation analysis. SPSS 22.0 and GraphPad Prism 9 were used for data processing and statistical analysis. A P value < 0.05 was considered statistically significant.
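As a minimal sketch of the comparisons and correlations described in this Statistical Analysis subsection, the snippet below contrasts IOD values between achalasia and control groups with a two-sample t test and computes a Spearman correlation between two markers; all values are placeholders rather than study data.

```python
import numpy as np
from scipy import stats

# Placeholder IOD values (arbitrary units), not the study's measurements
iod_ckit_achalasia = np.array([11.2, 9.8, 13.4, 8.7, 10.1, 12.0])
iod_ckit_control = np.array([16.5, 18.2, 15.9, 17.4, 19.1])

# Two-sample independent t test (achalasia vs control), as in the Methods
t, p = stats.ttest_ind(iod_ckit_achalasia, iod_ckit_control)
print(f"c-kit IOD, achalasia vs control: t = {t:.2f}, p = {p:.4f}")

# Spearman correlation between the M1 fraction of macrophages and ICC level
# within the achalasia cases (placeholder values)
m1_fraction = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.50])
rho, p_rho = stats.spearmanr(m1_fraction, iod_ckit_achalasia)
print(f"M1/MMphi vs ICC: Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```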
Demographic and Clinical Characteristics of All Subjects
The demographic characteristics of the 27 patients with achalasia and 10 controls are presented in Table . The mean age of patients with achalasia was lower than that of the control group (42.89 years vs 52.10 years, P = 0.003). There was no significant difference between the two groups in gender ( P = 0.276), BMI ( P = 0.074), or history of alcohol ( P = 1) or tobacco intake ( P = 0.069). None of the participants had a definite history of herpes simplex virus (HSV), varicella zoster virus (VZV), measles, or human papilloma virus (HPV) infection, or of autoimmune disease. As shown in Table , the median disease duration of patients with achalasia was 4 years (range 0.5–20 years), and the median preoperative Eckardt score was 7 (range 6–9). Two patients had a history of prior treatment: one with prior balloon dilation and esophageal stenting, and the other with prior POEM. All patients with achalasia underwent HREM; 4 had type I and 20 had type II achalasia. The remaining 3 patients were of unspecified subtype because the catheter sensors failed to pass through the EGJ (esophagogastric junction) owing to strong tortuous angulation of the esophagus. POEM was performed successfully in all 27 patients. Eckardt scores at 6 months after POEM were all less than 3 and were significantly decreased compared with pre-operation ( P < 0.001).
Histological Evaluation of Tissue Samples
The results of immunoreactivity and quantification for ICC, glial cells, and nNOS are shown in Fig. . Interestingly, staining for c-kit showed significantly fewer ICC in patients with achalasia compared with controls ( P = 0.018). However, no significant difference was found in glial cells or nNOS between the two groups ( P = 0.138 and 0.661, respectively). Figure A shows no considerable difference between the two groups in either total macrophages or M2 macrophages ( P = 0.621 and 0.539, respectively). In contrast, the level of M1 in patients with achalasia was higher than in controls, both in absolute number and as a proportion of total macrophages ( P = 0.026 for M1 and P = 0.037 for M1/MMφ; Fig. A, ). In addition, statistical differences were also found between the two groups in the proportion of M2 among total macrophages and the ratio of M1 to M2 ( P = 0.048 for M2/MMφ and P < 0.001 for M1/M2; Fig. C, ). Histological differences between the different types of achalasia were also analyzed, but no statistical difference was found in the levels of nNOS, ICC, glial cells, M1/MMφ, M2/MMφ, or M1/M2.
Correlation Analysis of Histological and Clinical Characteristics in Achalasia
For the correlation analysis of histological characteristics, significant correlations were detected between the levels of nNOS, ICC, and glial cells ( P = 0.026 for nNOS and ICC, P = 0.001 for nNOS and glial cells, P = 0.019 for ICC and glial cells; Fig. A–C). Moreover, as shown in Fig. G–I, there were significant correlations between M1/MMφ and the levels of ICC ( P = 0.016) and glial cells ( P = 0.020), but no clear relationship was found between M1/MMφ and nNOS ( P = 0.315) in patients with achalasia. Similar results were found between M2/MMφ and the levels of ICC ( P = 0.019), glial cells ( P = 0.004), and nNOS ( P = 0.135) (Fig. J–L). However, no significant correlations were found between M1/M2 and other histological characteristics (Fig. D–F).
To further explore the role of macrophages in achalasia, correlation analyses between clinical characteristics and M1/M2, M1/MMφ, and M2/MMφ were conducted. However, none of these ratios showed a significant association with clinical characteristics, including gender, age, BMI, disease duration, Chicago subtype, preoperative Eckardt score, IRP, or the decrease in Eckardt score after POEM.
Although more than 340 years have passed since achalasia was first described by Sir Thomas Willis in 1674, the etiology of achalasia remains unclear. Many studies have indicated that autoimmune-mediated inflammation may be the main cause of achalasia. A matched case–control study that enrolled 6769 cases and 27,076 controls found that the presence of autoimmune conditions and viral infections was associated with an increased risk of achalasia. The prevalence of autoimmune diseases in patients with achalasia was 7.42% (502/6769) versus 4.02% (1088/27,076) in controls, and the prevalence of viral infections was 1.58% (107/6769) versus 0.82% (221/27,076). These results are in accordance with other previous studies and suggest that aberrant autoimmunity and viral infections may contribute to the occurrence of achalasia. In our study, we collected clinical information from 27 patients with achalasia and 10 controls, but none of them had a definite history of HSV, VZV, HPV, or measles infection, or of autoimmune disease. This result may be attributed to the small number of cases in our study, and some patients with viral infections or autoimmune diseases may have been excluded because of surgical contraindications and the other exclusion criteria mentioned above. ICC are recognized as pacemaker cells that generate the spontaneous electrical slow waves regulating gastrointestinal motility, and they are also involved in the transfer of neurotransmitters. nNOS is the enzyme that produces NO, an inhibitory neurotransmitter, in nervous tissue to regulate muscle relaxation. Previous studies showed that the main pathological feature of achalasia is a decrease in esophageal myenteric plexus neurons, ICC, and nNOS in the LES. This pathological change leads to aperistalsis and impaired relaxation of the LES. In this study, although a significant difference was found only for ICC, the LES of patients with achalasia displayed fewer glial cells and less nNOS than that of controls. In the correlation analysis, significant correlations were detected between the levels of nNOS, ICC, and glial cells. A study by Ward et al. indicated that ICC may be the effectors that transduce NO signals into hyperpolarizing responses, and that loss of ICC may impair relaxation and normal motility of the LES. Another study also suggested that reduced nNOS release might underlie the profound ICC impairment, which could impair LES relaxation in patients with achalasia. However, no clear correlation between the degree of reduction of ICC and that of nNOS was found in other studies. Moreover, an animal study found that reduction of ICC and nNOS can cause dysfunction of the LES and esophageal peristalsis, but that they might be independent relevant causes. Enteric glial cells are thought to function as intermediaries in enteric neurotransmission, and thus their reduction might weaken the neurenteric balance. Although published studies have found that reductions of ICC and glial cells are present in gastrointestinal motor abnormalities such as slow transit constipation, colonic diverticular disease, and achalasia, to our knowledge no study has reported a clear correlation between the levels of ICC and glial cells in these diseases. We speculate that the causes of the reduction of nNOS, ICC, and glial cells are shared, and that their levels reflect the degree of damage, which would explain the correlations among them.
Overall, the roles of, and relationships between, ICC, nNOS, and glial cells in achalasia are not yet clear, and further studies are needed. MMφ were first described in the early 1990s as a macrophage subtype residing in the myenteric plexus in association with both ICC and enteric neurons. Although insight into the function of MMφ in the gastrointestinal tract is still limited, several studies indicate a key role in the regulation of gastrointestinal motility under both pathological and physiological conditions. Previous reports showed that M1-like macrophages can impair smooth muscle function and gut motility by producing pro-inflammatory cytokines such as IL-6, both in postoperative ileus and in intestinal I/R injury. This motility disorder was also associated with functional changes in the ICC networks, most likely caused by the inflammatory process. Moreover, inhibition of M1 macrophage-derived TNF-α, or suppression of M1 macrophage activation by blockade of IL-17A, could alleviate injury to the ICC. Emerging evidence from diabetic gastroparesis has revealed that heme oxygenase 1 (HO1), which is expressed by M2 macrophages, can protect ICC and nNOS expression in enteric neurons from the oxidative damage associated with diabetes and prevent the development of delayed gastric emptying. In contrast, activation of M1 macrophages, which lack HO1, can cause significant damage to ICC and enteric neurons and influence gastric motility. Furthermore, a reduction of CD206-positive cells was found in full-thickness biopsies of the gastric body from patients with diabetic gastroparesis, which was associated with loss of ICC. In addition to regulating inflammation and thereby influencing gastrointestinal motility, MMφ can also regulate peristaltic activity of the colon in the steady state through secretion of bone morphogenetic protein 2 (BMP2), which activates the BMP receptor (BMPR) expressed by enteric neurons. In a previous work, Luo et al. demonstrated that MMφ can directly affect the function of intestinal smooth muscle cells by expressing the transient receptor potential vanilloid 4 (TRPV4) channel, without involvement of the enteric nervous system. Nevertheless, no evidence yet links these mechanisms to the etiology of achalasia. In our study, we found that patients with achalasia had a higher level of M1/MMφ and a lower level of M2/MMφ in their LES than controls. Moreover, significant positive correlations were detected between M2/MMφ and the numbers of ICC and glial cells. These results indicate that M1 macrophages might underlie the reduction of ICC and glial cells through their pro-inflammatory functions, whereas M2 macrophages might play a protective role against injury. Since the median disease duration of patients with achalasia in this study was 4 years and no significant correlation was found between disease duration and the levels of M1/M2, M1/MMφ, or M2/MMφ, we think that the inflammatory injury to ICC and glial cells might be persistent. However, positive correlations were also found between M1/MMφ and the numbers of ICC and glial cells, which was contrary to our expectations. These results are difficult to explain; we hypothesize that M1 macrophages impair ICC and glial cells by secreting pro-inflammatory mediators, and that counting M1 macrophages without measuring the pro-inflammatory cytokines and chemokines they produce may not fully reflect the intensity of inflammation in the LES of achalasia patients.
In addition, other inflammatory cells such as mast cells and eosinophils may also affect the degree of injury to the cells related to GI motility, and these were not included in the study. To our knowledge, this is the first pilot study investigating the relationship between macrophages and achalasia, but it has several limitations. First, the number of patients with achalasia was limited because of the low incidence of the disease. This may reduce the statistical power to detect positive results and hinders subgroup analysis among different types of achalasia. Second, the controls were patients who underwent surgery for esophageal or stomach neoplasms rather than age- and gender-matched healthy individuals, and tissue samples were obtained in different ways in the two groups. Although this method has been widely used in previous studies, it carries inherent biases. In addition, the tissue samples obtained during the POEM procedure were small compared with those of controls, and a multi-site biopsy of the esophagus, which could reflect the pathological features of the whole esophagus, would be preferable. Finally, Western blot and fluorescence-activated cell sorting (FACS) for quantification were not performed. In our future work, we will expand the sample size, include related inflammatory cytokines and chemokines in the analysis, and explore the mechanisms underlying the role of MMφ in achalasia. In summary, the main finding of this study is that patients with achalasia had a higher M1/M2 ratio in the LES, and significant correlations were found between M2/MMφ and the numbers of ICC and glial cells, suggesting that MMφ are probably associated with the occurrence and development of achalasia. Further research should be undertaken to explore the mechanisms underlying the role of MMφ in achalasia and to illuminate the relationship between MMφ and nNOS, ICC, and glial cells.
|
Biomechanical Analysis of
|
b2961dec-9955-41e5-a91f-7260a69c3d77
|
10102316
|
Suturing[mh]
|
Lesions of the long head tendon of the biceps brachii (LHBT) are a common cause of anterior shoulder pain and flexion dysfunction, seriously affecting patients' quality of life. Surgical treatment is often required for LHBT lesions when conservative treatment fails. Among the surgical options, LHBT tenotomy and tenodesis are the most common. Tenotomy has the advantage of allowing early functional exercise without immobilization, but it carries the complication of an upper arm Popeye deformity (reported incidence, 10% to 58%). Tenodesis can restore the anatomical length–tension relationship of the biceps muscle, maintain its normal contour, and effectively reduce the incidence of complications such as the upper arm Popeye deformity. For LHBT tenodesis, because there is no consensus on the specific technique, much research has focused on the continuous improvement and development of existing tenodesis methods to obtain better biomechanical properties. As a result, many techniques have been introduced clinically, including open, mini-open, and arthroscopic approaches. Arthroscopic suprapectoral tenodesis at the intertubercular groove does not require an additional surgical incision, interferes less with the muscles, and relieves postoperative pain similarly to open surgery. It is therefore widely accepted by doctors and patients. However, compared with open tenodesis, the incidence of upper arm Popeye deformity remains higher with arthroscopic tenodesis. In arthroscopic biceps tenodesis, the choice of suture technique is limited by the space constraints within the joint. The simple stitch technique and the Lasso-Loop technique, which has stronger tissue-grasping ability, are commonly used. The Lasso-Loop suture, designed by Lafosse et al. in 2006, aims to improve tissue grip. Patzer et al. showed that the Lasso-Loop technique achieved strong and secure tenodesis, equivalent to interference screws in LHBT tenodesis. However, Kaback et al. found that, compared with the Krackow suture technique in human cadaveric LHBT tenodesis, the Lasso-Loop suture technique showed significantly worse mean failure load and mean stiffness values. In addition, this suture technique cannot prevent the suture from sawing longitudinally through the tendon, which may eventually result in suture pull-out and tenodesis failure, and may be one of the main reasons why the incidence of upper arm Popeye deformity remains high. Although the classic Krackow suture technique shows superior biomechanical properties for biceps tendon fixation, it is not easy to perform under arthroscopy. Therefore, secure tendon-grasping ability of the suture technique is critical for successful arthroscopic tenodesis and for obtaining a lower incidence of Popeye deformity of the upper arm with endoscopic fixation. The Lark-Loop stitch, newly presented in 2022 by Zhou et al., constructs a Lark's head knot that holds the tendon tissue, with the two suture ends piercing through the middle portion of the tendon. When tension is applied to the two suture ends, the Lark's head knot self-tightens and provides good tendon grip. At the same time, the knot acts as a rip-stop, restricting the suture from sawing through the tendon and reducing the risk of tenodesis failure. This suture construct can be performed entirely arthroscopically and is tear-resistant, allowing quick, easy, and safe tendon grasping. It is now used in arthroscopic tenodesis of the proximal long head of the biceps. However, the biomechanical properties of the Lark-Loop technique require further study.
The purpose of this study was to compare displacement, ultimate load to failure, and stiffness between the Lark-Loop technique and other common tendon suture techniques (the Krackow and the Lasso-Loop). Since the tendon-suture interface is the weak link in tenodesis using anchors, it was hypothesized that the Lark-Loop suture technique would provide better biomechanical results in terms of tendon-suture fixation strength than the Lasso-Loop suture technique. The study found no difference in elongation among the three groups (Lasso-Loop, Lark-Loop, and Krackow).
Porcine superficial digital flexor tendons were chosen for this laboratory study because they are similar to the human LHBT in anatomic appearance and biomechanical properties. In addition, superficial digital flexor tendons, which are widely available, are considered an ideal substitute for limited cadaveric specimen resources. All tendons were harvested from the forehooves of 6-month-old pigs after slaughtering at a local slaughterhouse. Since all the tendons selected for this study were harvested from pigs raised for meat production rather than for research purposes, no animal ethics approval was required. The flow chart of the study and the parameters to be tested are described in Fig. . All tendon specimens were harvested and isolated from 6- to 9-month-old pigs within an hour after slaughtering and were directly frozen at −20°C. In total, 33 fresh-frozen porcine superficial flexor tendons were randomly assigned to three groups (n = 11 each): Lasso-Loop, Lark-Loop, and Krackow. The tendons were stored at −20°C and thawed to room temperature 24 h before the beginning of the experiment. Saline solution (0.9%) was periodically sprayed onto the surface of the tendons to maintain moisture during preparation and testing. None of the tendons had degenerative or pathological changes.
Surgical Techniques
The 11 tendons in the Lasso-Loop group were sutured with No. 2 FiberWire suture (Arthrex, Naples, FL, USA) according to the protocol of Lafosse et al.: the midportion of the No. 2 FiberWire suture is passed incompletely through the tendon 1 cm from the distal end of the tendon, and one end of the suture is then passed back and threaded through the loop, thereby creating the Lasso-Loop construct (Fig. ). The 11 tendons in the Lark-Loop group were sutured with No. 2 FiberWire according to the protocol of Zhou et al. First, the No. 2 FiberWire suture was folded in half to encircle the tendon, with the two suture strands threaded into the loop to construct a Lark's Head Knot 1 cm from the distal end of the tendon. A needle carrying a No. 2 FiberWire was then pierced through the mid-center of the tendon as a guiding suture; the piercing point was close to the Lark's Head Knot but on the side distal to the tendon stump. Subsequently, an overhand knot was tied tightly over the two suture strands of the Lark's Head Knot with the guiding suture, and the guiding suture was pulled out of the tendon to shuttle the two free strands of the Lark's Head Knot through the tendon. The Lark-Loop construct was formed after removing the guiding suture and tensioning the working suture to remove excess suture within the tendon (Fig. ). The 11 tendons in the Krackow group were sutured with No. 2 FiberWire suture according to the protocol of Deramo et al. The first Krackow stitch was placed 1 cm from the distal end of the tendon with two locking loops, and the needle pitch was evenly maintained at 0.5 cm (Fig. ). All suture constructs were performed on each tendon by the same experienced orthopedic surgeon (Fig. ).
Biomechanical Testing
A BOSE testing machine (ElectroForce 3500; Bose Corporation, ElectroForce Systems Group, Eden Prairie, MN, USA) was used to perform biomechanical testing (Fig. ). First, each tendon was fixed in the sinusoid clamp with an equal free tendon length of 3 cm. Then, the two strands of the suture end were looped with 6-throw square knots over the post of the adapter of the testing machine.
The tendon was pre-loaded in tension with 5 N for 2 min. A purple dot was marked on each suture at the point where it pierced the tendon, serving as the marker for measuring displacement during pre-tension and cyclic loading. The tendons were then cyclically loaded in tension from 5 to 30 N for 500 cycles at 2 Hz. Elongation/displacement of the suture after cyclic loading was defined as the change in the distance between the purple markers after pre-tension and after cyclic loading; images were captured with an 8.9-megapixel digital camera (EOS 60D; Canon, Tokyo, Japan) and measured with image analysis software (ImageJ, version 1.53j; National Institutes of Health, Bethesda, MD, USA). After cyclic loading, all tendons were loaded to failure at a rate of 1 mm/s. The ultimate failure load was defined as the maximum tensile force, and the failure type was also recorded (suture breakage, tendon rupture, or suture cutting through the tendon). Finally, the stiffness of each tendon-suture construct was calculated from the linear region of the load-displacement curve.
Statistical Analysis
The sample size was estimated using sample size analysis software, PASS 15 (NCSS, LLC, Kaysville, UT, USA), based on our preliminary experimental results for suture displacement. In this preliminary experiment, nine tendons were randomly allocated to three groups (Lasso-Loop 2.78 ± 0.52 mm, Lark-Loop 2.10 ± 0.43 mm, Krackow 2.09 ± 0.41 mm). The α and power (1−β) were set at 0.05 and 0.80, respectively. Considering possible specimen loss during testing, 10% more samples were added to each group. Finally, a sample size of 11 specimens per group, 33 in total, was required. Statistical analysis was conducted with SPSS Statistics for Windows, Version 25.0 (IBM Corp, Armonk, NY, USA). All continuous outcomes are presented as mean ± standard deviation. The Kruskal-Wallis test was used to compare displacement, ultimate load to failure, and stiffness. Statistical significance was set at P < 0.05. Post hoc analysis with the Mann-Whitney U test with a Bonferroni correction was then conducted for multiple comparisons between the suture groups.
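To illustrate how the two pull-to-failure outcomes defined above can be derived from a recorded load-displacement trace, the sketch below takes the ultimate failure load as the maximum force and estimates stiffness as the slope of a straight-line fit over an assumed quasi-linear window; the trace and the 20-80% window are placeholders, and the testing machine's own software may compute these quantities differently.

```python
import numpy as np

# Placeholder load-displacement trace for one pull-to-failure test
displacement_mm = np.linspace(0.0, 15.0, 300)
load_n = 25.0 * displacement_mm - 1.0 * displacement_mm ** 2  # toy curve that peaks, then drops

# Ultimate load to failure: the maximum tensile force recorded
ultimate_load = load_n.max()

# Stiffness: slope of a linear fit over an assumed quasi-linear window,
# here taken as 20-80% of the ultimate load on the rising limb of the curve
rising = np.arange(load_n.argmax() + 1)
window = (load_n[rising] >= 0.2 * ultimate_load) & (load_n[rising] <= 0.8 * ultimate_load)
slope, _intercept = np.polyfit(displacement_mm[rising][window], load_n[rising][window], 1)

print(f"Ultimate load: {ultimate_load:.1f} N, stiffness: {slope:.1f} N/mm")
```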
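As a rough, hedged cross-check of the sample-size reasoning reported above, the snippet below converts the pilot group means and standard deviations into a Cohen's f effect size and solves for the required sample size with an ANOVA-based power calculation in statsmodels; this is only an approximation for illustration and is not the PASS 15 procedure actually used.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Pilot displacement data (mm): Lasso-Loop, Lark-Loop, Krackow
means = np.array([2.78, 2.10, 2.09])
sds = np.array([0.52, 0.43, 0.41])

# Cohen's f: spread of the group means relative to the pooled within-group SD
pooled_sd = np.sqrt((sds ** 2).mean())
effect_f = np.sqrt(((means - means.mean()) ** 2).mean()) / pooled_sd

# Solve for the total sample size across the three groups (alpha = 0.05, power = 0.80)
n_total = FTestAnovaPower().solve_power(
    effect_size=effect_f, nobs=None, alpha=0.05, power=0.80, k_groups=3
)
print(f"Cohen's f = {effect_f:.2f}, total n = {np.ceil(n_total):.0f} "
      f"(about {np.ceil(n_total / 3):.0f} per group before any safety margin)")
```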
Displacement
There were significant differences in displacement among the Lark-Loop group (2.00 ± 0.50 mm), Krackow group (1.95 ± 0.42 mm), and Lasso-Loop group (2.91 ± 0.63 mm) ( P = 0.0002). Post hoc analysis with Bonferroni correction showed no statistical difference between the Lark-Loop and Krackow groups (MD = 0.05, 95% CI: −0.52 to 0.61, P > 0.9999), but displacement in both was significantly less than in the Lasso-Loop group [(MD = 0.91, 95% CI: 0.35 to 1.47, P = 0.0009), (MD = 0.95, 95% CI: 0.39 to 1.5, P = 0.0005)] (Fig. ).
Ultimate Load to Failure
There were significant differences in ultimate load to failure among the Lark-Loop group (325.89 ± 12.01 N), Krackow group (301.51 ± 13.17 N), and Lasso-Loop group (141.51 ± 33.02 N) ( P < 0.0001). Post hoc analysis with Bonferroni correction showed no statistical difference between the Lark-Loop and Krackow groups (MD = 24.47, 95% CI: −6.084 to 55.03, P = 0.1463), but both were significantly greater than the Lasso-Loop group [(MD = −184.5, 95% CI: −215.0 to −153.9, P < 0.0001), (MD = −160.0, 95% CI: −190.6 to −129.4, P < 0.0001)] (Fig. ).
Stiffness
There were significant differences in stiffness among the Lark-Loop group (25.39 ± 2.68 N/mm), Krackow group (23.82 ± 1.67 N/mm), and Lasso-Loop group (14.34 ± 1.49 N/mm) ( P < 0.0001). Post hoc analysis with Bonferroni correction showed no statistical difference between the Lark-Loop and Krackow groups (MD = 1.571, 95% CI: −1.238 to 4.379, P = 0.4718), but both were significantly greater than the Lasso-Loop group [(MD = −11.05, 95% CI: −13.86 to −8.24, P < 0.0001), (MD = −9.477, 95% CI: −12.29 to −6.67, P < 0.0001)] (Fig. ).
Failure Mode
In the Lark-Loop and Krackow groups, all tendons failed by suture breakage, whereas in the Lasso-Loop group all tendons failed by the suture cutting through the tendon (Fig. ).
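For readers who wish to reproduce this style of analysis, the sketch below runs a Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney U tests with a Bonferroni correction, mirroring the Statistical Analysis section; the displacement values are placeholders, not the measured data.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Placeholder displacement values (mm) for the three suture groups
data = {
    "Lasso-Loop": np.array([2.9, 3.1, 2.6, 3.4, 2.8, 2.7, 3.0]),
    "Lark-Loop": np.array([2.0, 1.8, 2.3, 1.9, 2.1, 2.2, 1.7]),
    "Krackow": np.array([1.9, 2.1, 1.8, 2.0, 1.7, 2.2, 1.9]),
}

# Omnibus Kruskal-Wallis test across the three groups
h, p = stats.kruskal(*data.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Post hoc pairwise Mann-Whitney U tests with Bonferroni correction
pairs = list(combinations(data, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    u, p_pair = stats.mannwhitneyu(data[a], data[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, p = {p_pair:.4f}, "
          f"significant at corrected alpha {alpha_corrected:.3f}: {p_pair < alpha_corrected}")
```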
In this study, the Lark-Loop suture technique showed a biomechanical profile statistically equivalent to that of the Krackow suture technique and statistically superior to that of the Lasso-Loop suture technique. Compared with the Lasso-Loop technique, the Lark-Loop technique demonstrated less displacement, a higher ultimate load to failure, and greater stiffness. This result indicates that the Lark-Loop has mechanical properties comparable to those of the Krackow suture technique. However, the Krackow suture requires the surgeon not only to suture the tendon extracorporeally but also to pierce the tendon multiple times, which inevitably increases the operative time. In contrast, Lark-Loop tenodesis shares with the Lasso-Loop the advantages that it can be completed entirely arthroscopically and requires only a single pass through the tendon.
All-Arthroscopic Suprapectoral Biceps Tenodesis
Although some authors believe that open subpectoral LHBT tenodesis is more reliable for treating LHBT lesions, more studies have shown no significant differences in postoperative pain or functional restoration between arthroscopic and open treatment. Because arthroscopic surgery avoids additional incisions and deltoid dissection, it is widely welcomed by doctors and patients. Over the last 10-15 years, all-arthroscopic LHBT tenodesis has become the mainstream surgical approach for treating symptomatic LHBT lesions. The Lasso-Loop suture technique has reliable clinical outcomes and is currently the most commonly used arthroscopic onlay technique for LHBT tenodesis in the intertubercular groove. Although the Lasso-Loop suture technique is easy to perform under the arthroscope and has strong tissue-grasping ability, recent biomechanical studies have shown that it suffers from uneven load distribution across the tendon-suture interface. At the same time, the suture can easily cut the tendon and cause fixation failure, resulting in inferior biomechanical results compared with other techniques. Müller et al. recently modified the Lasso-Loop; although its maximum load, displacement, and stiffness improved to some extent, most of the tendon-suture constructs still failed by the suture cutting through the tendon, indicating that the safety of this fixation technique remains insufficient. In this study, the biomechanical properties of the Lasso-Loop suture technique were also examined. Its maximum failure load and failure mode were consistent with these previously reported results (about 150 N, longitudinal cutting through the tendon). Therefore, the insecurity of the Lasso-Loop suture technique in biceps onlay tenodesis was confirmed again.
Displacement, Ultimate Load to Failure and Stiffness in Different Studies
Displacement, as a parameter of primary tenodesis stability, has been evaluated by many researchers. The displacement of the Lasso-Loop technique was 2.91 ± 0.63 mm in this study. This is higher than in the published literature, where displacements between 0.7 and 2.6 mm have been reported for Lasso-Loop stitch biceps tenodesis. However, the cyclic loading in those studies was 20 or 100 cycles, whereas in our study the tendons were cyclically loaded for 500 cycles (the currently accepted protocol) before being loaded to failure, which may be one reason for the higher displacement. Another reason may be the different loads applied (5-20 N, Ponce et al.; 5-50 N, Patzer et al.); a lower cyclic load can also yield lower displacement.
Although the displacement of the Lasso-Loop suture technique varies from study to study, in this study the displacement of the Lasso-Loop suture technique was significantly higher than that of the Lark-Loop and Krackow suture techniques. According to research, it takes an average of about 112 N to flex the elbow to 90° while holding a 1 kg weight. Therefore, a tendon–suture fixation is considered reliable when the ultimate load to failure exceeds 112 N. Theoretically, although the ultimate load to failure of the Lasso-Loop suture technique is inferior to that of the Lark-Loop and the Krackow, the Lasso-Loop is still considered to provide relatively secure strength for maintaining daily activities at time zero. , However, in the follow-up of LHBT tenodesis with the Lasso-Loop, the incidence of the Popeye sign was still high. This suggests that the daily load of elbow flexion is far more than 1 kg, which requires a larger failure load to resist the force before tendon-to-bone biological healing occurs. The ultimate load to failure of the Lark-Loop stitch is as high as 325 N, which is much better than that of the Lasso-Loop. This is because the Lark-Loop stitch holds the tendon tissue with the two suture ends piercing through the middle portion of the tendon; when tension is applied to the two suture ends, the self-tightening of the Lark-Loop stitch provides good tendon grip. Therefore, we have reason to believe that this simple Lark-Loop suture technique would be a safe and good choice for arthroscopic fixation of the long head of the biceps tendon. Similarly, the stiffness of the Lasso-Loop has been reported in many studies, but the values vary widely. Kaback et al. reported stiffness values of only 4.5 N/mm for the Lasso-Loop stitch, while Müller et al. showed values of 13.1 N/mm for the Lasso-Loop technique. The Lasso-Loop stitch was modified into the 360Lasso-Loop to increase its stiffness, and after this modification the stiffness value increased to a certain extent (19.1 N/mm). In this study, the stiffness of the Lark-Loop suture technique also achieved a great improvement and was significantly higher than that of the Lasso-Loop (25.39 ± 2.68 N/mm vs. 14.34 ± 1.49 N/mm). The Sutures in the Lark-Loop and Krackow Techniques The Krackow suture is one of the traditional methods for repairing tendons. To determine whether the Lark-Loop technique has biomechanical properties similar to the Krackow technique, the difference between the Lark-Loop and Krackow techniques was also compared. The data from this study showed that the Lark-Loop suture technique achieved biomechanical properties similar to the Krackow suture technique (displacement, ultimate load to failure, and stiffness). Some may question that our Krackow stitching method is not the same as the classic Krackow method, because the Krackow technique used here is one suture with two locking loops on each side, whereas the classic version is one suture with three locking loops. However, as previously demonstrated, for the Krackow technique with one suture, the maximum failure load does not change significantly whether two, four, or six locking loops are used. In the experience of many clinicians, the more the tendon is sutured, the greater the damage to the tendon; and the more locking loops placed in the tendon, the greater the effect on suture elongation, which ultimately leads to failure of tendon-bone fixation.
Therefore, we believe that the Lark-Loop technique has biomechanical properties similar to the classic Krackow technique. Limitations and Strengths Although the superior mechanical properties of the Lark-Loop suture technique have been demonstrated in vitro with porcine superficial flexor tendons in this study, there are some limitations to our experiments. First, because human cadaveric specimens were difficult to obtain, all of our biomechanical tests were based on a porcine superficial flexor tendon model. Porcine tendons are not fully representative of human tendons, and the mechanical results of this study should therefore be interpreted with caution. However, several studies have performed biomechanical testing using the porcine flexor digitorum superficialis tendon because it exhibits anatomical and biomechanical properties similar to the human long head of the biceps tendon. Second, the mechanical results at time zero in vitro cannot accurately represent the mechanical changes of the tendon-to-bone healing process under physiological conditions in vivo. Relevant animal research is currently in progress to further verify the safety and superiority of this technique before it is promoted. Finally, this biomechanical study analyzes only the suture–tendon construct and does not combine it with anchor fixation under arthroscopy. To analyze this construct more accurately, it may be necessary to further study the combination of anchor fixation and the related suture techniques. Conclusion The Lark-Loop suture technique provides biomechanical properties at the tendon–suture interface comparable to those of the Krackow suture and, like the Lasso-Loop suture technique, is easy to perform under all-arthroscopic conditions. Therefore, this technique may be beneficial for arthroscopic fixation of the long head of the biceps tendon.
Min Zhou, Chuanhai Zhou and Dedong Cui designed the study and contributed to writing of the draft. Yi Long and Yan Yan contributed to data analysis and solved technical problems in software. Zhenze Zheng and Ke Meng contributed to data collection. Jinming Zhang participated in data extraction and analysis assistance. Jingyi Hou and Rui Yang participated in the design of this research and provided guidance and troubleshooting. All authors agree to be accountable for all aspects of the work. All authors read and approved the final manuscript. All authors contributed to the article and approved the submitted version.
This work was supported by the National Natural Science Foundation of China (NO. 81972067, 82002342) and the Fundamental Research Funds for the Central Universities, Sun Yat‐sen University (NO. 2020004).
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
|
HPV vaccine narratives on Twitter during the COVID-19 pandemic: a social network, thematic, and sentiment analysis
|
1ed69095-1933-442f-aaf7-725ff8ab50b4
|
10102693
|
Health Communication[mh]
|
Human papillomavirus (HPV) is the most prevalent sexually transmitted infection (STI) in the world and is associated with the development of multiple cancers (e.g., cervical cancer, anal cancer, oropharyngeal cancer) and health conditions (e.g., genital warts) . Most of these cancer cases are caused by nine types of HPV , and these high-risk HPV types are preventable with a safe and effective HPV vaccine that has been available since 2006 . Furthermore, HPV is attributable to 4.5% of all cancers (8.6% in women and 0.8% in men) . To reduce the global burden of cancer, and particularly cervical cancer, the World Health Organization (WHO) has set a goal that by 2030, 90% of girls in the world will have received the HPV vaccine by age 15 . Yet globally, we are still not on track to meet this goal, even in high-income countries with consistent access to the HPV vaccine. Researchers have theorized that in high-income countries, misinformation plays a large role in vaccine hesitancy and, as a result, in sub-optimal HPV vaccination rates . The increasing use of online sources for health information among the public also has the potential to impact vaccination uptake. While the online environment has the potential to enhance knowledge about vaccines and improve attitudes towards immunization as people share knowledge and experiences, it can also create an environment that spreads and amplifies misinformation about vaccination, including the HPV vaccine. In their study of HPV-related tweets, Dunn and colleagues found that approximately 25% of tweets espoused negative sentiments towards HPV immunization, and that exposure to these messages increased the likelihood of the reader subsequently posting their own negative-sentiment tweets about the HPV vaccine. After critical appraisal, the researchers found that negative-sentiment tweets on HPV vaccination tended to be characterized by misinformation and often leveraged opinion or anecdotes as evidence, rather than citing scientific information . When these negative messages are shared in communities formed on social media platforms such as Twitter, they may be widely spread and rapidly amplified as they reverberate through social networks, leading to the pervasive spread of "unbalanced, distorted, or inaccurate information about vaccines" [9, p.2] that becomes difficult to counter with health promotion messaging. Exposure to misinformation, or false information, has emerged as a public health concern since research has shown that even small exposures to anti-vaccination messaging in online settings (even as little as five minutes) can have a measurable negative impact on individuals' attitudes and intent towards immunization . These exposures to negative messaging can have measurable impacts on vaccination rates, as demonstrated by a large American study linking state-level HPV vaccine rates to the predominant tone on social media . On social media, sentiment towards HPV vaccination varies by platform, with Twitter and Instagram tending to be more positive toward HPV vaccination, whereas YouTube and Facebook tend to be more negative. Unfortunately, research has shown that users exposed to HPV vaccine messages are more likely to remember the messages surrounding alleged harms of the vaccine rather than its potential benefits .
This supports research suggesting that public health strategies which emphasize the provision of statistical information to vaccine sceptics can be less effective than information which focuses on conveying general takeaways and is framed to have an emotive appeal to the target's personal beliefs and reference group . The emotional resonance of information is highly impactful; one study demonstrated that parents who were exposed to both positive and negative messages about the HPV vaccine were less likely to vaccinate their children compared to those only exposed to positive messaging . Overall, the literature demonstrates that acceptance and uptake of the HPV vaccine is strongly tied to the information the prospective recipient is exposed to, with misinformation driving negative sentiment and negatively impacting the recipient's likelihood of consenting to vaccination. The past two years of the Coronavirus Disease 2019 (COVID-19) pandemic have seen increases in online interactions during periods of isolation and social distancing , as well as the intensification of the spread of health misinformation online . While discussions around COVID-19 have increased exponentially, recent research tracking public conversations has indicated that alongside increases in discussions of COVID-19 and COVID-19 vaccination, interest in other vaccines has not decreased and in some periods has, in fact, increased . Some researchers have hypothesized this is due to benefits stemming from improved public awareness of the value of vaccines , while others worry that the rapid development and approval of COVID-19 vaccines and subsequent vaccine mandates may have damaged public trust in institutions and impacted acceptance of other vaccines . This includes the HPV vaccine, which has experienced parental mistrust due to the perception that the vaccine is too new . This raises the question of whether the COVID-19 pandemic had an impact on vaccine hesitancy generally. Did greater public awareness of vaccines, as a result of the saturated information environment caused by the COVID-19 pandemic, increase vaccine scepticism in the public? This paper seeks to examine how discussions of the COVID-19 vaccines shaped the public's attitude toward HPV vaccination. While the existing work on vaccine hesitancy largely suggests that vaccine sceptics form their opinions about vaccines before considering new vaccines, this research was done on novel vaccine development for smaller-scale public health emergencies . Moreover, the COVID-19 pandemic has influenced routine immunization programs through delays to childhood immunization and school- and community-based immunization programs . Combined with decreases in screening uptake in community health centres due to the COVID-19 pandemic, researchers expect to see rises in vaccine-preventable diseases and cancer incidence . Therefore, in this context, there is an urgent need to understand how the COVID-19 pandemic has impacted attitudes and sentiments on the HPV vaccine to inform the development of health communication strategies that address misinformation, with the goal of increasing vaccine acceptance and encouraging HPV vaccine uptake. Objectives To describe and characterize vaccine hesitant and vaccine confident networks of tweets about HPV and HPV vaccination on Twitter from January 2019 to May 2021. To determine how HPV vaccine themes and sentiments differ between vaccine confident and vaccine hesitant networks.
To determine whether the themes and sentiments towards the HPV vaccine changed during the COVID-19 pandemic and the COVID-19 vaccine rollout. Data collection The Academic Research Product Track Application Programming Interface (API) from Twitter was used to collect global tweets from January 2019 to May 2021. Keywords related to HPV vaccination were informed by a rapid review of 13 peer-reviewed articles published between 2015 and 2020 focused on HPV vaccination. These keywords (e.g., "HPV" OR "Gardasil" OR "Cervarix") were used to gather tweets and re-tweets on HPV vaccination from individual Twitter accounts. From this same dataset, a Boolean search using the keywords ("COVID" OR "corona") was conducted to collect conversations around the COVID-19 pandemic and vaccinations. All data was imported, cleaned, and analyzed in Python version 3.8.5. Social network analysis (SNA) For this study, we first used SNA to identify social media accounts expressing confidence in or hesitance toward HPV vaccination. We created a network displaying the relationship between user accounts and retweets of other accounts (Fig. ). This was done to identify influential accounts, their level of influence, and their connections. While we recognize that tweets do not provide an exact indication of like-mindedness, on aggregate, users who exhibit social or intellectual homophily are more likely to interact with each other on social media . The Louvain modularity method was used to determine subclusters of online communities discussing HPV vaccination . The key influencers from the subclusters were studied to classify vaccine confidence and hesitancy networks in the social media space.
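As a rough illustration of this network step, the sketch below builds a weighted retweet graph and partitions it with the Louvain method. This is a minimal sketch, not the authors' actual pipeline: the input file name, the column names, and the use of NetworkX's built-in Louvain implementation (available in NetworkX 2.8+; the python-louvain package is an equivalent alternative) are assumptions for illustration.

```python
import pandas as pd
import networkx as nx

# Hypothetical input: one row per retweet, with the retweeting account and
# the account being retweeted (file and column names are illustrative).
edges = pd.read_csv("hpv_retweets.csv")  # columns: user, retweeted_user

# Build a directed retweet graph; the edge weight is the number of times
# one account retweeted another.
G = nx.DiGraph()
for (src, dst), count in edges.groupby(["user", "retweeted_user"]).size().items():
    G.add_edge(src, dst, weight=count)

# Louvain modularity optimisation to find retweet communities.
communities = nx.community.louvain_communities(
    G.to_undirected(), weight="weight", seed=42
)

# Inspect the largest communities and their most retweeted accounts
# ("key influencers"), which can then be reviewed manually to label a
# community as vaccine confident or vaccine hesitant.
for i, nodes in enumerate(sorted(communities, key=len, reverse=True)[:5]):
    sub = G.subgraph(nodes)
    influencers = sorted(sub.in_degree(weight="weight"), key=lambda x: x[1], reverse=True)[:10]
    print(f"community {i}: {len(nodes)} accounts, top influencers: {influencers}")
```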
Sentiment analysis and thematic clustering Social media conversations around HPV vaccines were analyzed using natural language processing. The tweets were first cleaned and processed for analysis using the Natural Language Toolkit (NLTK) library in Python. The topic themes of the tweets were identified using a mixed-method approach of unsupervised machine learning and qualitative content analysis of vaccine confident and vaccine hesitant tweets. An agglomerative hierarchical cluster model was first developed to detect clusters of topic themes . We then measured the term frequency-inverse document frequency (TF-IDF) of the clusters, which weighs the relevance of a word by how often it appears in a document (term frequency) relative to how often it appears across the collection of documents (inverse document frequency). This was used to down-weight words that appeared often but provided little information and to up-weight words that appeared only occasionally in the corpus but often in some documents. We performed qualitative content analysis to infer themes from our clustering model, using both TF-IDF outputs and a typology of themes that included examples and definitions of each theme. This allowed us to identify the predominant narrative theme in each cluster. The cluster analyses were conducted independently by two analysts (SK and JE), and the theme labelling of the clusters was reviewed to ensure consistency in the coding process. The review process was repeated until consensus on the topic theme for each cluster was reached.
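The clustering and TF-IDF step might look like the following sketch, which uses NLTK for basic cleaning and scikit-learn for the TF-IDF vectors and agglomerative hierarchical clustering. The cleaning rules, the number of clusters, and the number of top terms reported per cluster are illustrative assumptions, not the values used in the study.

```python
import re
import nltk
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

nltk.download("stopwords", quiet=True)
STOP_WORDS = set(stopwords.words("english"))

def clean(tweet: str) -> str:
    """Basic cleaning: lowercase, strip URLs, mentions and non-letters, drop stopwords."""
    tweet = re.sub(r"http\S+|@\w+|[^a-zA-Z\s]", " ", tweet.lower())
    return " ".join(w for w in tweet.split() if w not in STOP_WORDS)

def cluster_tweets(tweets, n_clusters=8, top_terms=10):
    """Cluster tweet texts and return cluster labels plus top TF-IDF terms per cluster."""
    docs = [clean(t) for t in tweets]
    # TF-IDF down-weights terms that occur everywhere and up-weights terms
    # that are frequent only within particular tweets.
    vectorizer = TfidfVectorizer(max_features=5000)
    X = vectorizer.fit_transform(docs).toarray()
    # Agglomerative (hierarchical) clustering of the TF-IDF vectors.
    labels = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward").fit_predict(X)
    terms = vectorizer.get_feature_names_out()
    top = {}
    for c in range(n_clusters):
        centroid = X[labels == c].mean(axis=0)
        top[c] = [terms[i] for i in centroid.argsort()[::-1][:top_terms]]
    return labels, top
```

The per-cluster term lists returned by such a function would then feed the qualitative theme labelling described above.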
To determine sentiment, we used a supervised model in which a random sample of tweets was labeled by our analysts along three categories: positive, negative, and neutral. We employed BERT (Bidirectional Encoder Representations from Transformers) to classify the sentiment of the tweets. BERT was used to learn the contextual relationships between words and to generate word embedding features by converting each tweet into a 768-dimension vector. The model was further fine-tuned by adding a sentiment classification layer to classify whether a tweet is negative, neutral, or positive in tone. The supervised model to classify sentiments for vaccine confident tweets had an accuracy score of 96.8%, and the vaccine hesitant supervised model had an accuracy of 97.3%.
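A hedged sketch of this sentiment classifier, using the HuggingFace transformers Trainer API, is shown below. The paper does not specify the exact BERT checkpoint, hyperparameters, or training loop, so the model name, number of epochs, batch size, and learning rate here are assumptions; the essential idea is a pretrained BERT encoder with a three-class classification head fine-tuned on the analyst-labelled tweets.

```python
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = {"negative": 0, "neutral": 1, "positive": 2}

class TweetDataset(Dataset):
    """Wraps analyst-labelled tweets for the Trainer API."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=max_len)
        self.labels = [LABELS[l] for l in labels]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

def train_sentiment_model(train_texts, train_labels, eval_texts, eval_labels):
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    # Pretrained BERT encoder (768-dimensional pooled representation) with a
    # classification head for negative / neutral / positive.
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
    args = TrainingArguments(output_dir="hpv-sentiment", num_train_epochs=3,
                             per_device_train_batch_size=16, learning_rate=2e-5)
    trainer = Trainer(model=model, args=args,
                      train_dataset=TweetDataset(train_texts, train_labels, tokenizer),
                      eval_dataset=TweetDataset(eval_texts, eval_labels, tokenizer))
    trainer.train()
    return trainer
```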
An HPV Twitter dataset was collected from January 2019 to May 2021 consisting of 596,987 tweets from 316,835 individual Twitter accounts. From this data, we used SNA to detect the polarization of vaccine hesitant and vaccine confident conversations around HPV disease and immunizations. Figure displays the social network of Twitter accounts and retweet clusters of HPV-related discussions among the vaccine hesitant and vaccine confident communities. In total, 95,908 tweets (16.1%) were clearly associated with vaccine hesitant networks in red, and 234,015 (39.2%) tweets were vaccine confident conversations. Sentiment analysis Figure presents the distribution of sentiment in tweets over time by vaccine community type. The fine-tuned BERT model was trained to classify positive, neutral, and negative sentiment tweets. Our model identified 4,555 (4.7%) positive, 38,713 (40.4%) neutral, and 52,640 (54.9%) negative tweets from the vaccine hesitant community. The vaccine confident community produced 65,838 (28.1%) positive, 120,704 (51.6%) neutral, and 52,640 (22.5%) negative tweets. The overall proportion of all three sentiments increased in both vaccine hesitant and vaccine confident groups around February 2020, when the WHO declared the COVID-19 outbreak a Public Health Emergency of International Concern. Comparing the sentiment of tweets in both vaccine confident and vaccine hesitant groups, as expected, the tweets of vaccine hesitant individuals were overall more negative in tone than those of the vaccine confident. Thematic clustering: vaccine safety, efficacy, and mistrust in institutions Through unsupervised machine learning, we identified the key narratives driving HPV vaccine hesitancy and confidence on social media. Tables and show the results from our topic theme analysis and utilization of TF-IDF output. As depicted in Table , vaccine hesitant narratives fall into three broad themes. Vaccine Safety was a large theme of discussion, accounting for approximately 64.1% (n = 60,436) of vaccine hesitant tweets. Vaccine safety tweets primarily focused on the side effects of the Gardasil vaccine and listed allegedly harmful compounds in the vaccine. The second topic centred around Vaccine Effectiveness (21.3%, n = 13,830), questioning the effectiveness of the vaccine in preventing cancer and sharing statistics of cervical cancer cases in the UK to support this argument. Finally, approximately 14.7% (n = 9,081) of the vaccine hesitant network expressed Mistrust in Institutions and showed resentment towards the US government mandating HPV vaccines in school. Conversely, the vaccine confident network discussions were framed in five distinct ways, as shown in Table . Health Outcomes were largely emphasized by the vaccine confident, making up around 38.4% (n = 89,964) of conversations. The narratives of this topic mainly focused on the impacts of the virus. These tweets emphasized HPV-related diseases such as head, neck and oropharyngeal cancer, awareness of cervical cancer in women, and sexually transmitted infections. The second predominant theme was Vaccination Campaigns (24.6%, n = 57,504), which highlighted the work of public health organizations and HPV advocates and promoted World Cancer Day or HPV-related social events. Vaccine Effectiveness (16.5%, n = 38,606) was the third most prevalent theme in the vaccine confident space, and saw tweets shared focusing on the effectiveness of the vaccine in eradicating cancer-causing HPV infections. Another topic theme expressed Mistrust Towards Anti-vaxxers (11.4%, n = 26,717) spreading HPV vaccine misinformation and called out myths and conspiracy theories that are shared by anti-vaxxers. Finally, Vaccine Access (7.9%, n = 18,378), related primarily to the procurement and distribution of HPV vaccines, was the last theme observed in the vaccine confident space. This topic also focused on the importance of HPV vaccines for males and supported HPV vaccination programs for boys in schools. Furthermore, vaccine access tweets highlighted the free distribution of HPV vaccines in Kenya and acknowledged the prospective long-term reductions in cervical cancer in Rwanda due to the implementation of its HPV vaccine program. Figure displays the proportion of vaccine confident and vaccine hesitant topic themes from January 2019 to May 2021. Health outcomes remained the primary topic of discussion and vaccination campaigns were the second most common theme in vaccine confident discussions. The long-term trends in the data show vaccine safety as the predominant theme of discussion among the vaccine hesitant cluster. Co-discussion of COVID-19 and HPV in online conversations The trends of COVID-19 mentions in tweets for HPV vaccine confident and vaccine hesitant networks are illustrated in Fig. . Vaccine hesitant tweets mentioning COVID-19 picked up in March and June of 2020, before steadily fluctuating from October 2020 to April 2021. In contrast, the proportion of tweets mentioning COVID-19 among vaccine confident discussions showed a slight peak in April 2020 before a significant peak in April 2021. Both vaccine confident and hesitant networks showed a lull of COVID-19 mentions in their tweets over the summer and early autumn of 2020. Changes in interactions of vaccine confident and vaccine hesitant networks Between January 2019 and May 2021, we identified 93,498 unique Twitter accounts expressing vaccine confidence and 234,015 expressing vaccine hesitancy. Figure displays the new unique accounts that tweeted HPV-related tweets that month. Notably, following the WHO's declaration of COVID-19 as a Public Health Emergency of International Concern, the number of new vaccine confident Twitter accounts decreased, while the number of new vaccine hesitant accounts increased.
Using data from over 500,000 tweets, we conducted social network analysis, sentiment analysis and thematic clustering to visualize the HPV vaccine hesitant and vaccine confident networks and describe the sentiments and themes of these networks' conversations. Further, we assessed whether there had been any changes to sentiments or themes in each network during the pandemic and subsequent COVID-19 vaccine rollout. While similar studies have examined the online conversation regarding the HPV vaccine using social network analysis and machine learning techniques , the present study contributes to the body of research by examining the influence of the COVID-19 pandemic on HPV vaccine sentiments in vaccine confident and vaccine hesitant networks on Twitter. Thus, this study provides a novel context with which to conceptualize the online discourse around HPV vaccination. Characterisation of vaccine confident and vaccine hesitant network Looking at the aggregate of the collected tweets, certain characteristics are particularly prominent. First, the HPV vaccine hesitant community is small and tightly clustered, comprising a homogenous and densely connected community facilitating the flow of information and socialization of its members to group norms . In contrast, the HPV vaccine confident community is larger, but also not as tightly grouped, consisting of three distinct and somewhat separated sub-communities. Both communities can be seen in Fig. and are highly polarised, with few connections existing between them. This also reflects the limited extent to which information can traverse between the communities and indicates that most of the information that users in each community are exposed to originates from within their own community. While this has the effect of insulating the vaccine confident accounts from potential misinformation originating from the vaccine hesitant space, it also means that there are limited opportunities for organic exchange of ideas to occur between networks. Thus, in this network structure, it is less likely that accurate information will be disseminated to the vaccine hesitant space from vaccine confident influencers.
Our results also indicate that there are few individuals or groups who act as bridges between these polarized groups. Therefore, from a public health perspective, there is a need to use other strategies to reach vaccine hesitant groups, and much of the current recommendations have focused on training public health experts or health professionals to address misinformation in online forums using plain-language communication strategies . Yet, more research is still needed on how to effectively bridge the communication divide between vaccine confident and vaccine hesitant groups. Sentiment and thematic clustering The types of sentiments expressed by the two communities exhibit substantial variance. As shown in Fig. , the HPV vaccine hesitant discourse is largely composed of negative sentiments. This appears to stem from the vaccine hesitant community’s inherently unfavourable view of vaccination, be it regarding the safety of the vaccine itself or the intentions of those promoting it. In contrast, the HPV vaccine confident discourse primarily consists of neutral sentiments. This may be a result of this community often engaging in more dispassionate discussions about empirical evidence and academic studies surrounding the vaccine and its outcomes, disseminating media coverage of related stories without offering comment of their own, or simply sharing scientific evidence and information without adding any specific message or interpretation. With respect to thematic clustering, there were both similarities and differences between the vaccine hesitant and confident communities. First, prominent themes in both communities (vaccine safety for the hesitant and health outcomes for the confident) are concerned with the potential health effects or consequences associated with receiving the HPV vaccine. Vaccine efficacy, for its part, remains marginal as a thematic cluster, suggesting that neither group emphasizes, on aggregate, the specific protection against HPV provided by the vaccine, and that focus is, instead, put on other health benefits or side effects. Furthermore, both communities also express mistrust in groups promoting the opposing narratives toward the HPV vaccine, i.e., institutions/elites or anti-vaxxers, again highlighting the deep polarization between these two communities. The predominance of concerns regarding vaccine safety and tweets demonstrating a mistrust in institutions and societal elites reflect similar topics to those prevalent on anti-vaccine websites . HPV vaccine hesitant and confident communities also significantly differ in other manners and the discussion of consequences of HPV vaccination within the hesitant group tends to focus on perceptions the vaccine is unsafe. A large amount of discourse is focused on Merck, purporting that their studies contain contradictions, that the science backing up their claims is unsound, and highlighting ongoing lawsuits involving the Merck-produced Gardasil vaccine. In addition, members of the hesitant community frequently share stories about specific cases of adverse or lethal reactions to the HPV vaccine. In this respect, there is a greater focus on individual stories and anecdotes as opposed to sharing studies on a group of statistically significant size. There is also discussion about both acute and chronic side effects of the Gardasil vaccine, including swelling, epilepsy-like conditions, and negative mental health outcomes. 
Blame for these side effects is often placed on supposedly harmful chemicals or compounds found within the vaccine, including aluminum. Overall, the discourse surrounding the HPV vaccine in the vaccine hesitant community reflects concerns with severe side effects and the potential of the vaccine to harm recipients. In contrast, within the vaccine confident group, discussions on consequences of HPV vaccination tends to focus on the benefits of the vaccine. They point to positive outcomes for women’s health, improvements to sexual health because of the uptake of a vaccine for a common STI, and general positive outcomes associated with receiving it. There is also a larger focus in this community on studies on the health effects of larger groups or populations, with proponents sharing studies conducted in jurisdictions where the HPV vaccine is widely available. Such studies indicate a decrease in the rates of cervical, throat, and anal cancers associated with HPV infections since the vaccine has become available. These studies further express hope that increased uptake of the vaccine could virtually eliminate several forms of cancer in the near future. Accordingly, vaccine confident individuals are likely to be positive about the benefits associated with the HPV vaccine. Within the vaccine confident community, there is also discussion around vaccination campaigns to encourage people to get the HPV vaccine. World Cancer Day is often mentioned as an example of an opportunity to advocate for greater global levels of vaccine uptake. Discussions of offering the vaccine for free to boys, in addition to girls, are also present, pointing to increased effectiveness of the vaccine with greater population uptake, as well as positive health outcomes for males. The Irish HPV vaccine advocate Laura Brennan is also mentioned often in the vaccine confident network. Hers is one of the few individual stories popular amongst the vaccine confident community, surrounding her campaign for increased HPV vaccination after personally receiving a terminal cervical cancer diagnosis in 2017 . There is, additionally, a negative discussion amongst the vaccine confident community surrounding the lack of access to the HPV vaccine due to global vaccine shortages for certain populations, namely in developing countries. Authors have highlighted issues around global equity as developed nations expand their vaccination programs to boys while developing nations do not have enough vaccine supply to vaccinate girls . Within this discussion of global vaccine equity, there are success stories shared. For example, Kenya is framed as a nation that has implemented free vaccine distribution with exhortations to Kenyan parents to take advantage of this access . Rwanda and Nigeria are often mentioned as nations that are capable of following Kenya’s lead in providing free vaccine access. Among the vaccine confident, these examples reflect the potential to combat cervical cancer globally, if vaccine supply meets demand. Mistrust from both communities also exists as a theme, but such mistrust is targeted at different groups. Vaccine hesitant groups tend to mistrust institutions and elites, calling into question their motivations and intentions behind encouraging or mandating vaccination . As such, if individuals believe these groups are incompetent or malicious, they may not trust that the stages of vaccine development have been carried out appropriately. 
Conversely, vaccine confident individuals tend to mistrust anti-vaxxers, accusing them of being ignorant and knowingly or unknowingly spreading misinformation . Vaccine confident individuals are more likely to believe that the development of these vaccines has been carried out safely and competently, so may hold those opposed to what they perceive as a potentially lifesaving vaccine on a large scale in poor regard. They may also be more likely to believe that prominent anti-vaxxers have an inherent malicious or selfish motivation and that followers of them are ignorant or misguided. Both these themes concern the perceived spread of wrong or misleading information. Additionally, within the hesitant community, discussions of vaccine efficacy focus on a purported rise in cervical cancer rates amongst the vaccinated. Within the confident community, the exact opposite is observed in discussions surrounding declines in HPV infections and cervical cancer rates associated with receipt of the HPV vaccine. Yet again, this dichotomy in opinion between the two communities reflects two contrasting realities prevalent in the discourse surrounding the HPV vaccine. Effects of COVID-19 and COVID-19 vaccine rollout Finally, an examination of the changes in sentiments and themes in Figs. and , it can be observed what effects the outbreak of COVID-19 and the early stages of COVID-19 vaccine rollout had on the HPV vaccine confident and hesitant communities. In Fig. , the HPV vaccine hesitant community shows a large uptick in discussion in late 2019 and early 2020, especially in negative sentiment discussions. This corresponds with legislative efforts in New York State to mandate the HPV vaccine for public school students , as well as the WHO declaration of COVID-19 as a Global Health Emergency . By April and May of 2020, the discussions return to levels like those found earlier in 2019. COVID-19, up until May 2021, appeared to have had little aggregated effect on the amount or sentiment of discussion in the HPV vaccine hesitant community on Twitter. This may indicate that the vaccine confident community had temporarily turned their attention away from HPV and were more focused on the ongoing COVID-19 pandemic as COVID-19 became the predominant focus in public health. Today, as a result, as routine vaccination catch-up programs are increasingly implemented, there will likely be a need to invest in health communication efforts to increase awareness about the HPV vaccine and its benefits as public interest and discussion have not been at the forefront of the minds of members of the public, even those individuals positively disposed to HPV vaccines, throughout the pandemic. Future work should consider measuring how the COVID-19 pandemic decreased uptake for vaccination in general and HPV in particular. The prevalence of HPV vaccine themes within the wider discourse, shown in Fig. , shows little change over the course of the COVID-19 pandemic during the time period studied. It seems that the pandemic had little to no effect on the thematic distribution within the discussions of the confident and hesitant communities. This aligns with the findings from Sobeczek, Gujski and Raciborski , who observed an intensification of the HPV vaccine discourse on Facebook during the COVID-19 vaccine distribution period but did not observe any changes in sentiment or theme of such online conversations. 
Therefore, public health professionals working today to craft HPV vaccine promotion messaging will not need to widely shift the focus on their messaging, as public discourse on the HPV vaccine does not appear to have shifted dramatically during the COVID-19 pandemic. Instead, public health professionals may wish to invest in raising awareness and interest in the HPV vaccine in general, given that public interest in COVID-19 and its vaccines have dominated public health conversations that previously included a focus on HPV vaccination. This is particularly important because research evidence from nations with previously high childhood vaccination rates, which include HPV vaccination, are seeing rates plummet in the wake of the COVID-19 pandemic . Figure shows the proportion of tweets mentioning COVID-19 within each community. As expected, there are no mentions of COVID-19 before early 2020. However, after the WHO declaration of a Global Health Emergency, both communities began mentioning COVID-19. These mentions trail off over summer and autumn of 2020, but in late December 2020, COVID-19 mentions pick up again in the HPV vaccine hesitant community. This corresponds with the earliest administrations of COVID-19 vaccines to vulnerable individuals. There are likely considerable numbers of individuals in the HPV vaccine hesitant community who are equally hesitant of the COVID-19 vaccines, and this development may have spurred this uptick in discussion within this community. It is also likely that increased awareness caused by COVID-19 vaccination drew attention to other vaccines less known to the public, accounting for this uptick in new accounts in the HPV vaccine hesitant community, as these individuals previously outside the HPV vaccine hesitant network drew comparisons between Gardasil and COVID-19 vaccines; which they were more familiar with. Mentions among the HPV vaccine confident community did not increase significantly until April of 2021, when the COVID-19 vaccines began a much wider-scale rollout. It is likely that proponents of the HPV vaccine are also in support of the COVID-19 vaccines, and that this increase in discussion corresponds to a greater push for individuals to get vaccinated against COVID-19. While there is still a paucity of literature examining the impact of the COVID-19 pandemic on intention to receive the HPV vaccine, there is some evidence the COVID-19 pandemic has increased parental intention to vaccinate their children against the flu . Thus, as high-income nations have begun to scale-back their COVID-19 vaccination campaigns, there is a need to re-introduce the value of the HPV vaccine. Finally, in Fig. , we observed, first, a slight decrease of the unique vaccine confident Twitter accounts and an increase in unique vaccine hesitant Twitter accounts after the WHO declared COVID-19 as a Global Health Emergency of International Concern. It is likely that public health figures redirected their online attention to socializing individuals to comply with public health mandates and encouraging COVID-19 vaccine uptake. This likely diverted their attention from HPV vaccination and can account for the diminished vaccine confident activity over this period. Second, it seems that the COVID-19 pandemic represented an exogenous event that drew attention to other vaccines that were not previously in the public focus. Indeed, we see an increase by 40% of accounts associated with the HPV vaccine hesitant network during the COVID period. 
Furthermore, it seems that Twitter's content moderation strategies did not significantly influence the dissemination of anti-vaccine content in the HPV vaccine hesitant network. On the one hand, we see a consistent growth in both vaccine hesitant narratives and engagement of new accounts in the vaccine hesitant network. On the other hand, a review of the top accounts in that space prior to and during COVID-19 suggests that such measures were limited. One would have expected Twitter to focus their content moderation efforts on the most prolific and influential individuals spreading anti-vaccine misinformation. Our assessment of the most influential accounts in the vaccine hesitant network reveals that Twitter only suspended the accounts of some of the least prominent influential figures and avoided taking on more powerful influencers and super spreaders engaged in a disinformation campaign on HPV vaccination.
This may be a result of this community often engaging in more dispassionate discussions about empirical evidence and academic studies surrounding the vaccine and its outcomes, disseminating media coverage of related stories without offering comment of their own, or simply sharing scientific evidence and information without adding any specific message or interpretation. With respect to thematic clustering, there were both similarities and differences between the vaccine hesitant and confident communities. First, prominent themes in both communities (vaccine safety for the hesitant and health outcomes for the confident) are concerned with the potential health effects or consequences associated with receiving the HPV vaccine. Vaccine efficacy, for its part, remains marginal as a thematic cluster, suggesting that neither group emphasizes, on aggregate, the specific protection against HPV provided by the vaccine, and that focus is, instead, put on other health benefits or side effects. Furthermore, both communities also express mistrust in groups promoting the opposing narratives toward the HPV vaccine, i.e., institutions/elites or anti-vaxxers, again highlighting the deep polarization between these two communities. The predominance of concerns regarding vaccine safety and tweets demonstrating a mistrust in institutions and societal elites reflect similar topics to those prevalent on anti-vaccine websites . The HPV vaccine hesitant and confident communities also differ significantly in other ways: the discussion of the consequences of HPV vaccination within the hesitant group tends to focus on perceptions that the vaccine is unsafe. A large amount of discourse focuses on Merck, purporting that its studies contain contradictions and that the science backing its claims is unsound, and highlighting ongoing lawsuits involving the Merck-produced Gardasil vaccine. In addition, members of the hesitant community frequently share stories about specific cases of adverse or lethal reactions to the HPV vaccine. In this respect, there is a greater focus on individual stories and anecdotes as opposed to sharing studies based on samples of statistically meaningful size. There is also discussion about both acute and chronic side effects of the Gardasil vaccine, including swelling, epilepsy-like conditions, and negative mental health outcomes. Blame for these side effects is often placed on supposedly harmful chemicals or compounds found within the vaccine, including aluminum. Overall, the discourse surrounding the HPV vaccine in the vaccine hesitant community reflects concerns with severe side effects and the potential of the vaccine to harm recipients. In contrast, within the vaccine confident group, discussions of the consequences of HPV vaccination tend to focus on the benefits of the vaccine. They point to positive outcomes for women’s health, improvements to sexual health because of the uptake of a vaccine for a common STI, and general positive outcomes associated with receiving it. There is also a larger focus in this community on studies of the health effects in larger groups or populations, with proponents sharing studies conducted in jurisdictions where the HPV vaccine is widely available. Such studies indicate a decrease in the rates of cervical, throat, and anal cancers associated with HPV infections since the vaccine has become available. These studies further express hope that increased uptake of the vaccine could virtually eliminate several forms of cancer in the near future.
Accordingly, vaccine confident individuals are likely to be positive about the benefits associated with the HPV vaccine. Within the vaccine confident community, there is also discussion around vaccination campaigns to encourage people to get the HPV vaccine. World Cancer Day is often mentioned as an example of an opportunity to advocate for greater global levels of vaccine uptake. Discussions of offering the vaccine for free to boys, in addition to girls, are also present, pointing to increased effectiveness of the vaccine with greater population uptake, as well as positive health outcomes for males. The Irish HPV vaccine advocate Laura Brennan is also mentioned often in the vaccine confident network. Hers is one of the few individual stories popular amongst the vaccine confident community, focusing on her campaign for increased HPV vaccination after she received a terminal cervical cancer diagnosis in 2017 . There is, additionally, a negative discussion amongst the vaccine confident community surrounding the lack of access to the HPV vaccine due to global vaccine shortages for certain populations, namely in developing countries. Authors have highlighted issues around global equity as developed nations expand their vaccination programs to boys while developing nations do not have enough vaccine supply to vaccinate girls . Within this discussion of global vaccine equity, there are success stories shared. For example, Kenya is framed as a nation that has implemented free vaccine distribution, with exhortations to Kenyan parents to take advantage of this access . Rwanda and Nigeria are often mentioned as nations that are capable of following Kenya’s lead in providing free vaccine access. Among the vaccine confident, these examples reflect the potential to combat cervical cancer globally, if vaccine supply meets demand. Mistrust also exists as a theme in both communities, but such mistrust is targeted at different groups. Vaccine hesitant groups tend to mistrust institutions and elites, calling into question their motivations and intentions behind encouraging or mandating vaccination . As such, if individuals believe these groups are incompetent or malicious, they may not trust that the stages of vaccine development have been carried out appropriately. Conversely, vaccine confident individuals tend to mistrust anti-vaxxers, accusing them of being ignorant and knowingly or unknowingly spreading misinformation . Vaccine confident individuals are more likely to believe that the development of these vaccines has been carried out safely and competently, so they may hold in poor regard those opposed to what they perceive as a vaccine with the potential to save lives on a large scale. They may also be more likely to believe that prominent anti-vaxxers have an inherent malicious or selfish motivation and that their followers are ignorant or misguided. Both these themes concern the perceived spread of wrong or misleading information. Additionally, within the hesitant community, discussions of vaccine efficacy focus on a purported rise in cervical cancer rates amongst the vaccinated. Within the confident community, the exact opposite is observed in discussions surrounding declines in HPV infections and cervical cancer rates associated with receipt of the HPV vaccine. Yet again, this dichotomy in opinion between the two communities reflects two contrasting realities prevalent in the discourse surrounding the HPV vaccine. Finally, from an examination of the changes in sentiments and themes in Figs.
and , it can be observed what effects the outbreak of COVID-19 and the early stages of COVID-19 vaccine rollout had on the HPV vaccine confident and hesitant communities. In Fig. , the HPV vaccine hesitant community shows a large uptick in discussion in late 2019 and early 2020, especially in negative sentiment discussions. This corresponds with legislative efforts in New York State to mandate the HPV vaccine for public school students , as well as the WHO declaration of COVID-19 as a Global Health Emergency . By April and May of 2020, the discussions return to levels like those found earlier in 2019. COVID-19, up until May 2021, appeared to have had little aggregated effect on the amount or sentiment of discussion in the HPV vaccine hesitant community on Twitter. This may indicate that the vaccine confident community had temporarily turned their attention away from HPV and were more focused on the ongoing COVID-19 pandemic as COVID-19 became the predominant focus in public health. Today, as a result, as routine vaccination catch-up programs are increasingly implemented, there will likely be a need to invest in health communication efforts to increase awareness about the HPV vaccine and its benefits as public interest and discussion have not been at the forefront of the minds of members of the public, even those individuals positively disposed to HPV vaccines, throughout the pandemic. Future work should consider measuring how the COVID-19 pandemic decreased uptake for vaccination in general and HPV in particular. The prevalence of HPV vaccine themes within the wider discourse, shown in Fig. , shows little change over the course of the COVID-19 pandemic during the time period studied. It seems that the pandemic had little to no effect on the thematic distribution within the discussions of the confident and hesitant communities. This aligns with the findings from Sobeczek, Gujski and Raciborski , who observed an intensification of the HPV vaccine discourse on Facebook during the COVID-19 vaccine distribution period but did not observe any changes in sentiment or theme of such online conversations. Therefore, public health professionals working today to craft HPV vaccine promotion messaging will not need to widely shift the focus on their messaging, as public discourse on the HPV vaccine does not appear to have shifted dramatically during the COVID-19 pandemic. Instead, public health professionals may wish to invest in raising awareness and interest in the HPV vaccine in general, given that public interest in COVID-19 and its vaccines have dominated public health conversations that previously included a focus on HPV vaccination. This is particularly important because research evidence from nations with previously high childhood vaccination rates, which include HPV vaccination, are seeing rates plummet in the wake of the COVID-19 pandemic . Figure shows the proportion of tweets mentioning COVID-19 within each community. As expected, there are no mentions of COVID-19 before early 2020. However, after the WHO declaration of a Global Health Emergency, both communities began mentioning COVID-19. These mentions trail off over summer and autumn of 2020, but in late December 2020, COVID-19 mentions pick up again in the HPV vaccine hesitant community. This corresponds with the earliest administrations of COVID-19 vaccines to vulnerable individuals. 
There are likely considerable numbers of individuals in the HPV vaccine hesitant community who are equally hesitant about the COVID-19 vaccines, and this development may have spurred this uptick in discussion within this community. It is also likely that increased awareness caused by COVID-19 vaccination drew attention to other vaccines less known to the public, accounting for this uptick in new accounts in the HPV vaccine hesitant community, as these individuals previously outside the HPV vaccine hesitant network drew comparisons between Gardasil and the COVID-19 vaccines, with which they were more familiar. Mentions among the HPV vaccine confident community did not increase significantly until April of 2021, when the COVID-19 vaccines began a much wider-scale rollout. It is likely that proponents of the HPV vaccine are also in support of the COVID-19 vaccines, and that this increase in discussion corresponds to a greater push for individuals to get vaccinated against COVID-19. While there is still a paucity of literature examining the impact of the COVID-19 pandemic on intention to receive the HPV vaccine, there is some evidence that the COVID-19 pandemic increased parental intention to vaccinate their children against the flu . Thus, as high-income nations have begun to scale back their COVID-19 vaccination campaigns, there is a need to re-introduce the value of the HPV vaccine. Finally, in Fig. , we observed, first, a slight decrease in the number of unique vaccine confident Twitter accounts and an increase in unique vaccine hesitant Twitter accounts after the WHO declared COVID-19 a Global Health Emergency of International Concern. It is likely that public health figures redirected their online attention to socializing individuals to comply with public health mandates and encouraging COVID-19 vaccine uptake. This likely diverted their attention from HPV vaccination and can account for the diminished vaccine confident activity over this period. Second, it seems that the COVID-19 pandemic represented an exogenous event that drew attention to other vaccines that were not previously in the public focus. Indeed, we see a 40% increase in accounts associated with the HPV vaccine hesitant network during the COVID-19 period. Furthermore, it seems that Twitter’s content moderation strategies did not significantly influence the dissemination of anti-vaccine content in the HPV vaccine hesitant network. On the one hand, we see a consistent growth in both vaccine hesitant narratives and engagement of new accounts in the vaccine hesitant network. On the other hand, a review of the top accounts in that space prior to and during COVID-19 suggests that such measures were limited. One would have expected Twitter to focus its content moderation efforts on the most prolific and influential individuals spreading anti-vaccine misinformation. Our assessment of the most influential accounts in the vaccine hesitant network reveals that Twitter only suspended the accounts of some of the least prominent influential figures and avoided taking on more powerful influencers and super spreaders engaged in a disinformation campaign on HPV vaccination. To our knowledge, this is the first study using social network analysis and sentiment analysis to examine the impact of the COVID-19 pandemic on sentiments on HPV vaccination among English-language vaccine hesitant and vaccine confident networks on Twitter. The present study has several strengths.
First, this study is reinforced by a rapid review of the literature that informed the development of the HPV-related keywords used to gather the tweets and retweets analyzed. Second, using network analysis and machine-learning text analysis allowed us to compare specific narratives and sentiment within vaccine hesitant and confident online conversations, thus providing a more nuanced understanding of the underlying frames relied on by these communities. Third, this study allowed us to assess the temporal evolution of discussion of HPV vaccination with reference to other epidemiological events (i.e., the COVID-19 pandemic). There are several limitations in this study that could be addressed in future research. First, in terms of the time period of study, tweets were collected before the COVID-19 pandemic and during the start of the COVID-19 vaccine rollout. During this time, there was a COVID-19 vaccine eligibility requirement and not all individuals were eligible to be vaccinated. Therefore, the COVID-19 mentions in our dataset do not capture the complete impact on online conversations and sentiment around the HPV vaccine amidst the COVID-19 vaccine rollout. Further research is needed to see whether our findings are reflective of the entire COVID-19 pandemic, which as of writing is ongoing. Second, while Twitter is commonly used to study online social interactions, it is not representative of the general population, particularly youth, who are the target population for HPV vaccination campaigns. For this reason, examining these research questions by also collecting data from other social media platforms such as Facebook, Reddit, Instagram, and YouTube, and analyzing HPV conversations from these platforms, is an area for future study. Finally, a fruitful area for further work is to investigate the impact of COVID-19 on the dynamics of the vaccine hesitant HPV network. While we have demonstrated that the number of unique accounts in the vaccine hesitant HPV network grew over the course of the COVID-19 pandemic, future research should consider how the pandemic influenced the level of polarization or the level of interaction between the vaccine hesitant and vaccine confident networks. Further analysis of such dynamics could highlight whether the vaccine hesitant space has become more or less penetrable by information originating among vaccine confident users, including public health figures. While existing research has shown that polarization has increased in other online contexts over the course of the COVID-19 pandemic , it remains to be seen if such polarization extended to online discussions about HPV immunization. Further, while we have offered a limited analysis of the impact of Twitter’s content moderation policies on the dissemination of vaccine hesitant tweets related to HPV above, a full study of the impact of content moderation on anti-vaccine network structures is a rich and interesting topic deserving of its own paper. Such research would be highly topical, particularly given Elon Musk’s recent takeover of Twitter and ongoing debate around whether Twitter should serve as a “de facto public town square” . An analysis of whether the misinformation environment on Twitter changed when moderation strategies were relaxed would meaningfully contribute to and enrich such debates.
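To make the network-analysis approach referred to above concrete, the sketch below shows one generic way a retweet network could be partitioned into communities and its degree of polarization summarized. It is illustrative only: the toy edge list, the modularity-based community detection, and the cross-community edge share used as a polarization indicator are assumptions for the example, not the authors’ actual pipeline.

```python
# Illustrative sketch: partition a retweet network into communities and
# summarize polarization. The edge list is toy data, not the study's dataset.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each tuple is (retweeting_account, retweeted_account) in a hypothetical sample.
retweet_edges = [
    ("userA", "userB"), ("userA", "userC"), ("userB", "userC"),
    ("userD", "userE"), ("userD", "userF"), ("userE", "userF"),
    ("userC", "userD"),  # a rare interaction that bridges the two groups
]

G = nx.Graph()
G.add_edges_from(retweet_edges)

# Modularity-based community detection (one of several reasonable choices).
communities = greedy_modularity_communities(G)
membership = {node: i for i, comm in enumerate(communities) for node in comm}

# Crude polarization indicator: the share of edges that cross communities.
cross = sum(1 for u, v in G.edges() if membership[u] != membership[v])
print("communities:", [sorted(c) for c in communities])
print(f"cross-community edge share: {cross / G.number_of_edges():.2f}")
```

A low cross-community edge share is consistent with the insulated, echo-chamber structure described in the discussion, and the few nodes incident to those cross edges are natural candidates for the bridging accounts mentioned above.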
With the onset of the COVID-19 pandemic, discussions around the HPV vaccine on Twitter decreased among vaccine confident networks, but we did not observe any significant changes in sentiment or themes surrounding the HPV vaccine during the COVID-19 pandemic and the distribution of the COVID-19 vaccine. Safety concerns surrounding the HPV vaccine represent the predominant theme discussed in the HPV vaccine hesitant networks. Further, the COVID-19 pandemic drew new users to engage in the HPV vaccine hesitant space. Therefore, as public health practitioners prioritize vaccine catch-up programs, there is a need to raise public consciousness of the HPV vaccine and its benefits. The themes of such a campaign should prioritize a refutation of safety concerns, which is the primary critique of anti-Gardasil influencers.
|
Disparity in access for people with disabilities to outpatient dental care services: a retrospective cohort study
|
f08d59b2-b865-4b6b-8e51-64befe0161b7
|
10102694
|
Dental[mh]
|
People with disabilities have more difficulty accessing healthcare services than do those without disabilities, because of mobility restrictions, social discrimination, and lower income due to disability . Maltais et al. reported that people with intellectual disabilities used optometry, physiotherapy, and Pap tests significantly less often than did people without disabilities. Rouleau et al. also reported that 16.6% of the patients experienced difficulties in receiving dental treatment after they became disabled. In addition, previous studies on the oral health status of disabled people found that the prevalence of edentulous tooth loss and dental caries was higher among people with disabilities than in those without disabilities . Furthermore, the more severely disabled the patient, the greater the number of missing teeth and the lower the number of restored teeth . The primary purpose of dental care use among people with disabilities was related to pain management, rather than oral disease prevention or regular checkups . Income, education level, place of residence, demographic characteristics of caregivers, type of disability, and severity of disability affected the use of dental care by people with disabilities . A regular source of dental care (RSDC) is a factor that influences an individual's use of health services . Having an RSDC is related to receiving oral health services and has a positive effect on the oral health management not only of individuals themselves but also of their children . According to previous studies, there were differences in having an RSDC according to social disadvantage, such as income, education, occupation, medical insurance, age, marital status, and subjective health status [ – ]. However, although many studies have reported a positive relationship between having an RSDC and healthcare use, few studies have examined the effect of an RSDC on the healthcare use of people with disabilities. In addition, most dental research focusing on an RSDC was conducted before 2010, whereas more recent research has tended to approach the question in terms of regular dental visits. Previous studies on dental care use among disabled people have reported somewhat inconsistent findings. A Korean study reported that disabled people used dental care 0.97 times less than non-disabled people . However, other studies found that the dental service use rates of non-disabled and disabled people were similar or that people with disabilities used more dental services . To date, no studies have reported whether an RSDC affects the frequency or cost of dental care over a long period. This study aimed to investigate the effect of an RSDC on the use of dental care by people with disabilities using repeatedly measured claims data.
Research materials and participants
This retrospective study was approved by the Institutional Review Board of the Ethics and Scientific Review Committee of Wonkwang University (WKIRB-201911-SB-082) and performed in accordance with the Declaration of Helsinki. The Ethics committee waived the requirement for informed consent due to the retrospective study design and the anonymity of the NHI claims data. Data management was conducted after receiving approval to use the database of the NHI Service and was performed using a computer installed at a location with restricted external access. The NHI DBs are relational databases, and variables were extracted by merging three DBs using the primary key: the DB containing the information of the insured, the DB summarizing the treatments, and the DB containing the detailed treatment details. Data cleaning and management were performed by the authors using R version 4.0.3 (R Foundation for Statistical Computing, Vienna, Austria). This study used cohort data (2002–2018) constructed from claims data of the Korean National Health Insurance (NHI). The NHI claims databases used in this study included data on healthcare use, detailed treatments, and sociodemographic information of the individuals. Sociodemographic data contained information on age, sex, region, income-based premiums, insurance type, and the type and grade of disability . For analysis, the NHI data were organized in a cohort format. For example, NHI dental care users in 2002 were followed up until 2018 in a cohort format. Similarly, new dental care users who did not overlap with previous years were added and were followed up to 2018. Overall, this study included 7,896,251 dental care users. The dependent variables were the number of annual dental care visits and the expenditure per visit. Dental expenses refer to the total expenditures, including the insurer’s contribution and the insured’s out-of-pocket expenses. The Korean Welfare Act for Persons with Disabilities classifies persons with disabilities into 17 categories, and the severity of their disability is organized into six levels. For the independent variables, grades 1, 2, and 3 were classified as severe disability, whereas grades 4, 5, and 6 were categorized as mild disability. A regular source of dental care was defined as continuous dental care use for 2 years or more . The age groups were categorized into children under 20 years, adults aged 20–64 years, and older individuals aged 65 years or older. Income levels were estimated based on the NHI premium (high, middle, and low), and residential areas were categorized into large cities, small cities, and rural areas. In addition, other variables, including medical aid and sex, were used as independent variables.

Statistical analysis
Descriptive statistics were calculated for the number of annual dental visits and dental expenses per visit using the chi-square test and one-way analysis of variance. The Scheffe post-hoc test was performed for multiple pairwise comparisons. The associations between the dependent variables (the number of annual dental visits and dental expenses per visit) and the availability of a regular source of care in each disability group were analyzed using generalized estimating equations (GEEs). GEEs are commonly used for analyses that account for intra-individual correlation due to repeated measures, not only in medicine and the life sciences , but also in dentistry .
Notably, the GEE method calculates population-averaged estimates on the premise that the independence assumption of linear regression is violated because of correlation between the residuals of repeated observations . In our GEE analysis, the reference group comprised male individuals aged ≥ 65 years, living in large cities, with no regular source of dental care, without medical aid, with a low income, and without disabilities. Additionally, interaction effects were included to determine whether the number of annual dental visits and dental expenses per visit differed according to the combination of a regular source of dental care and the severity of disability. All analyses were performed using R version 4.0.3 (R Foundation for Statistical Computing, Vienna, Austria) .
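The study's analyses were run in R; as a rough illustration of the modelling approach described above, the sketch below fits a Poisson GEE with a log link, an exchangeable working correlation, and an RSDC-by-severity interaction using Python's statsmodels on simulated data. All variable names and the simulated values are hypothetical, not the study's data or code.

```python
# Hedged illustration of the GEE set-up described in the text.
# Simulated toy data; column names (visits, rsdc, severity, age_group,
# patient_id) are assumptions for the example only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_years = 200, 3
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_years),
    "rsdc": np.repeat(rng.integers(0, 2, n_patients), n_years),
    "severity": np.repeat(rng.choice(["none", "mild", "severe"], n_patients), n_years),
    "age_group": np.repeat(rng.choice(["<20", "20-64", "65+"], n_patients), n_years),
})
df["visits"] = rng.poisson(lam=2.0 + df["rsdc"], size=len(df))

# Poisson GEE with an exchangeable working correlation to account for the
# repeated yearly observations within each patient, and an RSDC-by-severity
# interaction mirroring the study design.
model = smf.gee(
    "visits ~ C(age_group) + rsdc * C(severity, Treatment(reference='none'))",
    groups="patient_id",
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```

In R, the same structure corresponds to fitting the model with geepack's geeglm(), using a Poisson family and corstr = "exchangeable" with the patient identifier as the clustering variable.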
Sample characteristics
The number of annual dental visits was highest in the severe disability group, followed by the mild disability and no-disability groups, whereas the dental expenses per visit were in the order of mild disability, no disability, and severe disability. Among people with disabilities, the proportion of men using dental services was approximately twice as high as that of women. More people with disabilities had an RSDC than did those with no disability. In the no-disability group, the number of individuals with an RSDC was approximately 22% of the number without a regular source of dental care, whereas the corresponding ratio was approximately 28% in the group of people with disabilities. The proportion of medical aid users was highest among people with severe disabilities, being approximately 13 times higher than that among people with no disability and three times higher than that among people with mild disability. The income level distribution was similar to the medical aid distribution, and the lowest income group accounted for the largest portion of the severe disability group (Table ).

Number of annual dental visits and expenses per visit
The average annual number of dental visits was higher among males with disability and tended to decrease with increasing age in all three groups. An RSDC had a greater impact on dental use than did other variables, such as sex and age. Participants with an RSDC had 1.64 times more dental visits than did those without an RSDC. This trend was similar for both people with and those without disabilities. For example, in the absence of an RSDC, the average number of annual dental visits of people with severe disability was 2.27, which was about 15% higher than that among those without disabilities, but increased by 1.73 times when an RSDC was available. Women had fewer dental visits than men; however, they spent more on dental care per visit. Women with severe disabilities had the highest dental expenses per visit (Fig. ). As age increased, dental expenses per visit tended to decrease, and the number of dental visits in the group without an RSDC was low. However, dental care expenses per visit in the group without an RSDC were higher than those of their counterparts.

Effect of a regular source of dental care on the annual number of dental visits and expenses
According to the GEE analysis, there was essentially no difference in average dental care use between men and women, regardless of statistical significance (the annual number of dental visits in males was only 4% higher than that in females). Individuals aged 65 years and older had the lowest dental care use, whereas the group under 20 years old and the 20–64-year-old group had 51% and 26% higher dental care use, respectively, than the older group. In the no-disability group, the simple effect of an RSDC was 13.2%. In other words, individuals with an RSDC would have roughly a quarter of a visit more per year (= 0.132 × 2.25 average dental visits) than those without an RSDC. The income and medical aid trends were similar. The higher the income level, the lower was the annual dental care use of individuals with no medical aid (Table ). The effects of an RSDC varied according to the severity of disability (Fig. ). The interaction term for individuals with mild disability was not statistically significant; however, it was marginally significant for those with severe disability.
Considering the interaction effect between the severity of disability and availability of an RSDC, the simple effect of RSDC was approximately 21.5% (= exp [0.124 + 0.072]) for individuals with severe disability. This was greater than the simple effects among those with mild disability or with no disability, which were 12.3% and 13.2%, respectively. In terms of dental expenses per visit (Table ), older individuals spent the least, but those aged < 20 years spent more and had more dental visits than did those in other age groups. Individuals aged < 20 years tended to spend 95% more than did the older individuals. Results using an interaction term are presented as a plot (Fig. ) to examine whether the availability of an RSDC made a difference to the dental expenses per visit according to the severity of the disability. Each point shown in the plot is a predicted value. The slope of the line between the two points represents a simple effect. In Fig. , the slopes for those with mild and severe disability were significantly larger than that for those with no disability, implying that using an RSDC would significantly reduce the dental expenses per visit for people with disabilities.
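The simple effects quoted above are obtained by summing the relevant coefficients on the log scale and then exponentiating. A small worked example of that arithmetic, using the coefficients reported in the text (0.124 for the RSDC main effect and 0.072 for the RSDC-by-severe-disability interaction), is shown below; the helper function is illustrative rather than the authors' code.

```python
import math

def simple_effect(main_coef: float, interaction_coef: float = 0.0) -> float:
    """Percentage change implied by log-link GEE coefficients."""
    return (math.exp(main_coef + interaction_coef) - 1.0) * 100.0

# RSDC effect in the no-disability group (main effect only): ~13.2%.
print(f"no disability: {simple_effect(0.124):.1f}% more annual visits")
# RSDC effect in the severe-disability group (main effect + interaction):
# exp(0.124 + 0.072) ~ 1.22, i.e., roughly the 21.5% reported above.
print(f"severe disability: {simple_effect(0.124, 0.072):.1f}% more annual visits")
```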
This study analyzed the number of dental visits and expenses of people with disabilities using the 2002–2018 NHI claims data from South Korea and investigated their relationship with a regular source of care. We found that those with an RSDC visited dentists more often and that the effects of having an RSDC varied according to the severity of disability. Women and older individuals with disability used dental care less and spent more per dental visit than did their counterparts. Physical barriers, care costs, and dental fear are common obstacles to dental care among people with disabilities . People with disabilities had more decayed–missing–filled teeth, decayed teeth, and missing teeth than did those with no disabilities, resulting in poor oral health . Lin et al. reported that the dental service access rate was lower in people with disabilities than in the general population, but people with disabilities also had a higher dental filling rate and periodontal treatment rate. A Brazilian study found that the proportion of disabled people using dental care was lower than that of people with no disability . Park et al. reported that the number of dental care users with disabilities was lower than that of individuals with no disability, but that the frequency of use was higher in those with disabilities than in those without disabilities. The annual number of dental visits in this study was 2.61 and 2.63 for people with mild disabilities and with severe disabilities, respectively, which was more than the 2.23 visits of people with no disability (Table ). For people with disabilities, a retreatment visit might be necessary because the treated teeth could not be maintained in a healthy state due to the lack of preventive care and difficulties in oral hygiene management. Furthermore, people with disabilities might need additional visits because of difficulty cooperating during dental visits . In South Korea, access to most dental services, other than orthodontic and prosthodontic services, is less limited because the Korean NHI is available to all citizens, and out-of-pocket costs are lower for people with disabilities and for low-income groups . Based on the above-mentioned evidence, it was presumed that the frequency of dental care use of people with disabilities would be higher than that of those with no disability. The present study confirmed the inequality in dental care use between the sexes in people with disabilities. Among the NHI beneficiaries, the proportion of men with disabilities was approximately 10% higher than that of women with disabilities (data not presented), whereas the proportion of dental care users among men with disabilities was approximately twice that of women with disabilities. The average annual number of dental visits was also higher among men with disabilities (Fig. ). Gender inequality for people with disabilities has often been reported in terms of the use of healthcare services . In this study, women with disabilities visited dentists less frequently than did men with disabilities. Women with disabilities have been reported to have a lower level of education, a tendency to be poorer, and lower employment prospects than men with disabilities . The economic status of people with disabilities is related to the ability to pay for dental expenses and might act as one of the major factors determining the number of dental visits compared with individuals with no disability.
A previous study reported that approximately 60%–80% of people with disabilities had economic reasons for unmet dental needs. This proportion was higher than that among people without disabilities. In this study, people with disabilities with a high income level had a higher annual number of dental visits. The difference in the annual number of dental visits according to the income level of people with disabilities was slightly larger than the difference in the no-disability group. We speculate that women with disabilities might face a greater financial burden in paying for dental treatment than men with disabilities. People with disabilities need a guardian or caregiver because of difficulties in mobility and appropriate communication when accessing dental care . In South Korea, women with disabilities are reported to receive less care support than men with disabilities. A previous study found that the average number of days of care per month was greater for men than for women with disabilities. In a study in the United States , women with disabilities received less home-care support than did men with disabilities. This suggests that gender inequality exists in the social support of people with disabilities. Among people with disabilities, the average annual number of dental visits decreased significantly in those aged 65 years and over, and the cost of dental treatment also tended to decrease across age groups (Fig. ), consistent with the findings of a previous study . However, as age increases, the need for oral treatment tends to increase, due to chewing and swallowing difficulties and dry mouth , among other causes. In addition, activity restrictions and economic factors hinder the use of dental care. In a previous study, older people with restrictions in daily activities used dental care less often than those with restrictions in instrumental activities of daily living . The oral health of older individuals is influenced by physical function and cognitive impairment . For older individuals in South Korea, health insurance benefits include denture and implant services, as well as conventional oral treatments, including oral surgery . Nonetheless, our study results showed that this Korean NHI benefit package was still not sufficient to reduce inequality in dental care among older people with disabilities. In this study, an RSDC had a greater effect on dental treatment use than other factors, including sex and age. Considering the interaction effect between the severity of disability and an RSDC in this study, the number of annual dental visits for people with severe disabilities with an RSDC was increased by 22%, i.e., it was 1.83 times higher than that of people without disabilities. However, the dental costs per visit of people with disabilities with an RSDC were reduced, and the effect was more marked in those with severe disabilities. Regular dental visits positively affect active oral health behaviors, including repeated dental visits, periodic preventive measures, and participation in oral health education . In a previous study, regular dental visits were beneficial for the early detection and treatment of oral diseases and were associated with a reduced incidence of periodontitis . A study of people aged ≥ 65 years reported that regular dental visits reduced the risk of severe disability .
In this regard, an RSDC will greatly contribute to regular dental visits of people with disabilities who are more vulnerable to poor oral health due to difficulties in communication and limited physical movement. In terms of the availability of RSDC among people with disabilities, caregivers' awareness of the importance of RSDC, caregivers' active attitude toward oral health care, and policies that encourage dentists to actively perform dental treatment for people with disabilities seem to be necessary (e.g., dental care facility expansion, care incentives for providers, and improving awareness of health disparities among people with disabilities) . According to a previous study that investigated the barriers to oral health among people with disabilities, 54% of dentists reported they would not treat people with cognitive impairment and a poor ability to collaborate during treatment, and 50% of dentists who treated people with cognitive impairment reported that they did not include such patients in follow-up. As the importance of regular dental care for people with disabilities has been established, the need for active use of teledentistry has been emphasized . In particular, during the COVID-19 pandemic, teledentistry reduced the risk of cross-contamination, enabled regular examinations, and helped reduce the occurrence of emergencies [49.50]. Thus, teledentistry has been proposed as an efficient and effective way for people with mobility restrictions or social barriers, such as people with disabilities, to regularly evaluate their oral health without visiting the dentist . This study had some limitations. This study analyzed data of people who had used dental care services; thus, caution is needed in interpreting the results. In this study, the number of annual dental visits was higher in people with disabilities than in those with no disability, but the analysis was based only on people with disabilities who used dental care. Therefore, our finding does not reflect the difference in dental care use among the entire population. In future studies, it is necessary to examine the effects of an RSDC and inequality in dental care among all people with disabilities, including those who do not use dental care. Additionally, caregivers significantly affected the dental visits of people with disabilities; however, the study did not investigate caregivers’ factors because of the limitations of the NHI data. Nevertheless, this study was significant in that it analyzed the dental care use of people with disabilities using repeatedly measured, long-term Korean NHI data and confirmed the positive effect of an RSDC.
This study showed that patients with an RSDC had more annual dental visits and lower dental expenses. For older individuals, despite the increased dental needs associated with age, the number of dental visits and the associated expenses were low. A similar trend was also observed in women with disabilities. For those with severe disability, an RSDC was effective in increasing the number of dental visits and reducing dental expenses. Based on this study's findings, policies that can help to provide an RSDC for people with disabilities and to resolve inequality in dental services for women and older individuals with disabilities would result in significant improvement in the use of dental care.
|
Case Studies for Overcoming Challenges in Using Big Data in Cancer
|
894fd5d8-8b62-4a5a-ab14-4d70fe9302d7
|
10102839
|
Internal Medicine[mh]
|
Vast amounts of biological and clinical data are being created to provide the research material needed to develop more effective cancer treatments and patient management. In our first paper , we described current and evolving principles for managing these data successfully. They need to be organized, shared, integrated, and made readily accessible . Our first report highlighted the scope of the challenges associated with each of those steps. It offered an array of existing efforts and opinions aimed at mitigating the respective roadblocks and pain points. This paper focuses on illustrating the successful implementation and challenges of these efforts through cancer-specific use cases from several major data repositories. These select oncology case studies provide various approaches to overcoming the aforementioned data access, quality, and analytic challenges. Each example starts with a description of the effort's purpose, content, and progress and finishes with a discussion of challenges specific to the case studies and lessons learned, summarized in . Integration of multiple data types and access to analytical tools are the critical capabilities of the resources described in this paper. For example, the Genomic Data Commons (GDC) is a component of the NCI Cancer Research Data Commons (CRDC; https://datascience.cancer.gov/data-commons ), which includes multiple sets of curated clinical genomics data, as well as imaging, proteomics, and associated metadata and will soon incorporate digital pathology and multispectral data from the Human Tumor Atlas Network (HTAN; https://humantumoratlas.org ).
CancerLinQ ( https://www.cancerlinq.org/ ) is a health technology platform developed and implemented by CancerLinQ LLC, a wholly owned nonprofit subsidiary of the American Society of Clinical Oncology (ASCO). Since its founding in 2014, the CancerLinQ network platform has grown to include more than 100 participating healthcare organizations and oncology practices in the United States. As of December 2021, its database contained more than 6 million total patients, more than 2 million of which have a primary or secondary diagnosis of a malignant neoplasm. The CancerLinQ mission is to empower the oncology community to improve quality of care and patient outcomes through transformational data analytics. CancerLinQ collects comprehensive longitudinal clinical data, both structured and unstructured, from a wide variety of electronic health records (EHR) and other source systems, aggregates the data, then harmonizes, normalizes, and curates them to conform to a Common Data Model (CDM) to support queries. The data are delivered back to the contributing practices as dashboards, reports, and a suite of electronic clinical quality measures, to be used for quality improvement and clinical care. Additionally, the aggregated data undergo software-based Health Insurance Portability and Accountability Act-compliant deidentification and can then be used for data exploration and insights by practices and for discovery by the broader oncology community, including academic researchers, nonprofits/government agencies, and life sciences companies .

Lessons learned
Operationalizing collection, aggregation, and normalization of massive amounts of real-world oncology data (RWD) at scale requires a highly flexible cloud-based technology stack and an automated data pipeline to enable frequent updates to the aggregated data set. Additionally, the ability to integrate with all the leading information systems is critical for broad adoption, as is an open platform for developing a broad range of third-party applications. Integration into the oncology practice workflow is essential for physician acceptance, and there should be minimal additional data capture required of the healthcare provider. Solutions that can be used for quality reporting for value-based care arrangements or solving other practice challenges that affect revenue cycles can be enormously attractive to business owners tracking return on investment. Encouraging better data hygiene in physician documentation practices and greater reliance on structured data input can be a delicate negotiation with busy clinicians, who tend to have a high comfort level with dictation and free text. Wider adoption of standard data specifications like Minimal Common Oncology Data Elements (mCODE) within the EHR itself can eliminate some of this burden. The learning health system requires data beyond primary clinical phenotypic data from EHRs. Notable gaps include structured molecular data [predominantly somatic next-generation sequencing (NGS) reports], insurance claims data, prescription refill data, and patient-reported outcomes data. Inclusion of digital histopathology data and Digital Imaging and Communications in Medicine (DICOM) data can potentially enable the widespread application of artificial intelligence (AI)/machine-learning (ML) technologies to extract even greater information and utility.
The recently published final rules on interoperability from the Centers for Medicare and Medicaid Services and the Office of the National Coordinator for Health Information Technology should significantly improve overall cancer data interoperability ( https://www.healthit.gov/isa/united-states-core-data-interoperability-uscdi ). The requirement for application programming interfaces (API) to use the modern Fast Healthcare Interoperability Resources (FHIR) standard ( https://hl7.org/FHIR/ ) should be transformational for data exchange and the ability of patients to access their own data.
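As a rough illustration of what the FHIR API requirement means in practice, the snippet below queries immunization records from a FHIR R4 server over the standard REST interface and iterates over the returned Bundle. The base URL and patient identifier are placeholders for the example, not real CancerLinQ, EHR, or government endpoints.

```python
# Illustrative FHIR R4 REST query; the endpoint and patient ID are placeholders.
import requests

FHIR_BASE = "https://example-ehr.org/fhir/R4"  # hypothetical server

resp = requests.get(
    f"{FHIR_BASE}/Immunization",
    params={"patient": "Patient/12345", "_count": 50},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR Bundle resource

for entry in bundle.get("entry", []):
    imm = entry["resource"]
    vaccine = imm.get("vaccineCode", {}).get("text", "unknown vaccine")
    print(imm.get("occurrenceDateTime", "unknown date"), vaccine)
```

The same pattern applies to other FHIR resource types (Patient, Condition, Observation, and so on), which is part of what makes the standard attractive for both inter-system exchange and patient-facing access.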
American Association for Cancer Research (AACR) Project Genomics Evidence Neoplasia Information Exchange (GENIE) is an international pan-cancer clinical-genomic registry of RWD assembled by sharing data between 19 leading academic cancer centers in an active consortium . The primary goal of the registry is to improve clinical decision-making, particularly in the case of rare cancers and rare variants in common cancers , by collecting data from nearly every patient sequenced at participating institutions. Through the efforts of Sage Bionetworks ( https://sagebionetworks.org/ ) and cBioPortal for Cancer Genomics ( https://genie.cbioportal.org/ ), the registry aggregates, harmonizes, and links clinical-grade NGS data with clinical outcomes obtained during routine medical practice from cancer patients treated at participating institutions. The consortium and its activities are driven by openness, transparency, and inclusion, ensuring that project output remains accessible to the global cancer research community for patient benefit. Details of project governance, operations, and participant sentiments have been described previously . The tenth public data release occurred in July 2021 and contained the clinical-genomic sequencing results of 120,953 samples from 111,222 patients. As of December 2021, nearly 10,000 individuals had registered to use the data, and more than 500 papers had cited the registry. The first 4 public data releases are also available for analysis within the NCI GDC ( https://gdc.cancer.gov/about-gdc/contributed-genomic-data-cancer-research/genie ). At the outset of the project, it was decided to harmonize existing data from each participating institution instead of agreeing to a common platform/methodology and prospectively collecting data. A significant advantage of harmonizing existing data is that you can rapidly generate large data sets. For example, the GENIE registry crossed the 100,000-patient mark in 4 short years. However, this approach has trade-offs, including some missingness across the data set; for example, if an institution does not assay for a particular gene. Additionally, there are complexities involved in harmonization and quality control (QC) for hundreds of different data sources. For example, the 10.0 public release includes data from 92 different sequencing panels and covers 1,348 unique genes. The harmonization process begins with file preparation at each institution, where files to be transferred are mapped to prespecified formats. Generally, data transfers contain all high-quality, somatic calls, including variants of unknown significance, which have been locally reviewed by the institution. During the upload process, files are checked against a file validator by the system, and submitters are notified of issues. Sage Bionetworks processes, harmonizes, filters, and QCs the data monthly before internal release. As highlighted throughout this text, QC is paramount and included at multiple steps throughout the data transfer process. Before initial data transfer, data providers filter any known artifacts, as well as any known germline variants. After harmonization, the data are filtered centrally for the correct date for inclusion, mutations in cis, patient retraction, and potential germline mutations. Finally, before each public release, a dedicated working group manually reviews an entire release looking for data artifacts and inconsistencies. 
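A submission-time file check of the kind described above can be sketched generically. The required columns and allowed values below are loosely modeled on a MAF-style mutation file and are illustrative assumptions only; they are not the GENIE consortium's actual submission specification, and the real validator is maintained by Sage Bionetworks.

```python
# Generic sketch of a pre-upload check for a tab-delimited mutation file.
# Column names and allowed values are illustrative, not the GENIE format.
import csv

REQUIRED_COLUMNS = {"Hugo_Symbol", "Chromosome", "Start_Position",
                    "Reference_Allele", "Tumor_Seq_Allele2", "Tumor_Sample_Barcode"}
VALID_CHROMOSOMES = {str(i) for i in range(1, 23)} | {"X", "Y", "MT"}

def validate_mutation_file(path: str) -> list[str]:
    """Return human-readable validation errors (empty list if the file passes)."""
    errors = []
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return [f"missing required columns: {sorted(missing)}"]
        for line_no, row in enumerate(reader, start=2):
            if row["Chromosome"] not in VALID_CHROMOSOMES:
                errors.append(f"line {line_no}: unexpected chromosome {row['Chromosome']!r}")
            if not row["Start_Position"].isdigit():
                errors.append(f"line {line_no}: non-numeric Start_Position")
    return errors

if __name__ == "__main__":
    for problem in validate_mutation_file("data_mutations_extended.txt"):  # hypothetical file
        print(problem)
```

Checks of this kind catch format problems before upload, leaving the heavier semantic review (artifact detection, germline filtering, cross-release consistency) to the centralized harmonization and QC steps described above.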
A suite of internal tools, as well as shared algorithms, helps flag potentially problematic institutional data for review when compared with the entirety of a release (e.g., mutation frequencies, demographic distribution, etc.). The output of each review is frequently used to develop additional code and filters to help prescreen subsequent releases to the extent possible. Finally, all prior data are overwritten with each submission, ensuring that the most recent data release is as accurate and current as possible. Archival copies of all prior data releases are kept for reference and to maintain analytic integrity. Patient protection is at the forefront of all processes. Each institution either consents patients for data sharing or provides data through an Institutional Review Board approval or waiver. Data are currently deidentified following Safe Harbor protocols, and all dates are converted to intervals from various anchor dates. Importantly, a simple click-through terms of access was implemented to protect patient identities while making data access as easy as possible ( https://docs.google.com/forms/d/e/1FAIpQLScwlJ9WRmAGZ08CCg8wYo8l8bcUmsAzJ09i1MKjBNtb_dLqIw/viewform ). Additionally, both explicit and implicit patient retraction processes are deployed, allowing for active or passive patient removal, respectively. Finally, internal filters have been developed to remove any potential germline mutations and/or identify single-nucleotide polymorphisms as an additional layer of patient protection.
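The date handling described above, in which calendar dates are replaced by intervals from anchor dates, can be illustrated with a minimal sketch; the anchor event and field names below are hypothetical.

# Sketch of Safe Harbor-style date obfuscation: calendar dates become day offsets
# from a per-patient anchor date. The anchor choice and field names are illustrative.
from datetime import date

def days_from_anchor(event_date: date, anchor_date: date) -> int:
    """Number of days between an event and the patient's anchor date."""
    return (event_date - anchor_date).days

anchor = date(2018, 3, 14)  # e.g., date of diagnosis, retained only behind the institutional firewall
deidentified_record = {
    "sample_collected_day": days_from_anchor(date(2018, 5, 2), anchor),   # 49
    "last_follow_up_day": days_from_anchor(date(2021, 1, 30), anchor),    # 1053
}
print(deidentified_record)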
Lessons learned
When AACR Project GENIE first launched, the initiative was as much a sociological experiment as it was a clinical research project. It was a coalition of those willing to put aside apprehensions, and it helped catalyze a cultural shift toward sharing and collective work. Quality assurance (QA)/QC is a continuously iterative process with shared responsibility between the data providers and central project administration. Each data release provides insight that is incorporated into the underlying architecture to aid with future releases. Early in the project, QA/QC was left to the last step before a public release, which overly complicated releases and introduced delays. By adopting a monthly internal release schedule, the data quality improved, as did "on-time" deliveries, and the QA/QC process became easier. The consortium has built an extensible operational framework focusing on using existing standards whenever available, with the goal of "future-proofing" the project to the extent possible. Finally, there is a natural temporal lag built into the system. As the amount of data increases and more applications for real-time data use become apparent, the group is looking toward a shift to a fully federated system.
Project Data Sphere (PDS) was established in 2014 to catalyze patient-focused cancer research and accelerate new therapy development. Its open-access digital library laboratory provides a secure platform for researchers to access deidentified patient-level clinical data. Currently, more than 2,600 authorized users on the PDS platform can access 200+ clinical trial data sets representing 200,000+ patients suffering from various tumor types, including breast, colorectal, esophageal, stomach, leukemia, lymphoma, multiple myeloma, ovarian, uterine, and prostate. To ensure researchers can realize the full potential of these data, PDS partnered with SAS Institute, which provides data mining and ML tools within the PDS environment. Research using these data sets has led to more than 135 peer-reviewed publications. To ensure success, PDS follows these principles:
- Uses an open-access model; following a simple and fast user registration, platform users can freely peruse the 200+ data sets.
- Data on the platform consist of highly annotated late-stage clinical trial data sets that are ideal for data-powered hypothesis testing.
- Users' ability to analyze data with SAS data analytic and visualization tools in the cloud or to download raw data files for analysis in their local environments maximizes accessibility and interoperability.
- The platform can adapt to evolving data opportunities. The platform started hosting solely clinical trial data. In 2020, the capabilities were expanded to host medical imaging, registry, and genomic data.
This combination of characteristics has been the driver of a remarkably high ratio of publications to data sets.
Lessons learned
Despite significant progress, numerous challenges to widespread data sharing and reuse remain. Specifically, various data providers are uncomfortable with providing, or are unable to provide, open access to their data, and prefer gatekeeper models. Reasons for this range from data privacy laws and data life cycle management (generation, release, and update) to the fear of divergent data reanalyses and competitive advantage concerns. An ad hoc exercise is currently required for each research project to navigate discovery, obtain permissions, access, and consolidate data. Our call to action for the research community is to conduct data generation with the expectation of open data sharing at some point in the data life cycle.
The NCI GDC was launched in 2016 and was one of the first large-scale data-commons. A data commons collocates data with computing infrastructure and software services, tools, and applications to create a data platform for managing, harmonizing, analyzing, and sharing data sets. It is especially valuable for large data sets that can be challenging to manage and analyze without large-scale cloud computing infrastructure. As of January 1, 2022, the GDC contained over 84,000 cases and 3.7 petabytes (PB) of data spanning molecular, image, and clinical data. More than 50,000 researchers use it monthly, accessing more than 1 PB of data. One of the challenges faced by the GDC was to develop an architecture that could (i) manage PB-scale genomics data; (ii) support a rich data model containing clinical, phenotypic, biospecimen, and imaging data; and (iii) provide an experience that enabled users to interactively explore the large amounts of data managed by the GDC. The GDC used a cloud-based architecture for this, initially a private cloud hosted at The University of Chicago, and later a hybrid cloud spanning The University of Chicago data center and Amazon Web Services. Through the NCI CRDC, data are made available both in Amazon Web Services and Google Cloud Platform for cloud-based applications. The GDC manages two types of data. The first is object data, such as BAM files, which are identified by persistent globally unique identifiers (GUID). A service translates each GUID into the object's physical location, which may be in multiple locations, either in the on-premises cloud or in a public cloud. This approach allows the data to be moved or replicated without changing any of the code that references the data. The second type is structured data, such as clinical data, phenotype data, or metadata, which is stored in a database. The structured data also include metadata about each of the data objects. In this way, all data in the GDC meet the FAIR standards. Importantly, this architecture has enabled the GDC to scale substantially since its launch, both in number of users and amount of data. Because all the data in the GDC are available through open FAIR APIs, a rich set of applications has been built over the data in the GDC, both by the GDC itself and by third parties. The use of cloud computing also provided the flexibility, scalability, and burst capability required so that the GDC could harmonize all the data submitted to it using a common set of bioinformatics pipelines within a fixed time after data submission.
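As a small example of the open-API access mentioned above, the sketch below queries the public GDC files endpoint in the way one might from a notebook. The project, data format, and fields chosen are arbitrary examples, and the request layout follows the publicly documented REST API.

# Example query against the public GDC REST API (project, format, and fields are arbitrary examples).
import json
import requests

FILES_ENDPOINT = "https://api.gdc.cancer.gov/files"

filters = {
    "op": "and",
    "content": [
        {"op": "in", "content": {"field": "cases.project.project_id", "value": ["TCGA-BRCA"]}},
        {"op": "in", "content": {"field": "data_format", "value": ["MAF"]}},
    ],
}
params = {
    "filters": json.dumps(filters),
    "fields": "file_id,file_name,data_category,cases.submitter_id",
    "format": "JSON",
    "size": "10",
}
response = requests.get(FILES_ENDPOINT, params=params)
response.raise_for_status()
for hit in response.json()["data"]["hits"]:
    print(hit["file_id"], hit["file_name"])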
Lessons learned
Factors contributing to the wide use of the GDC:
- Includes data from important and interesting studies, including The Cancer Genome Atlas (TCGA) and Therapeutically Applicable Research to Generate Effective Treatments (TARGET).
- Includes an interactive visualization to explore both molecular and clinical data, with the ability to produce and download publication-quality figures.
- All data in the GDC, comprising over 68 projects, are harmonized with respect to a CDM (the GDC data model).
- All data in the GDC are processed with a common uniform set of bioinformatics pipelines, making the data much simpler to understand and analyze.
- The GDC has an open API with a rich collection of applications built around the API. The GDC's API makes it easy to access data from the GDC in Jupyter Notebook, RStudio, and other API-based applications.
Important challenges for systems such as the GDC include:
- Reducing the time and effort to ingest new data sets. Although the GDC provides an API to upload data, the API requires understanding the GDC data model and transforming data into a format compatible with the API. This can be a time-consuming step, and the GDC has recently developed several tools to help data submitters format data correctly for uploading.
- There are currently more than 25 bioinformatic pipelines. These pipelines must not only be run over all submitted data in a timely fashion, but also over all the relevant data whenever any of the pipelines are updated. The GDC has developed a large-scale bioinformatics execution service for this purpose called the GDC Pipeline Automation System.
- Enhancing the functionality of the GDC while operating the GDC and improving its efficiency. As is the case for many large-scale operational systems, each year, the GDC must balance enhancing the system's functionality, adding more projects, refreshing old functionality, updating the technology stack, and improving overall efficiency.
The most extensive integrated healthcare system in the United States, the US Department of Veterans Affairs (VA) Veterans Health Administration (VHA), aggregates a large-scale data repository consisting of various modalities, including clinical, imaging, and genomic data from EHR. Although the primary reason for collecting data is for delivering healthcare to patients, these RWD have also proven essential for supporting QA, cutting-edge research, and other healthcare system needs. The VHA also executes several data-related initiatives or initiatives with a data aggregation component that operate outside of the EHR. These projects include efforts by the Cooperative Studies Program (started in 1972) to capture data for clinical trials and other epidemiologic studies, the Million Veteran Program (beginning in 2011) for collecting germline sequencing data and patient-reported demographics, and the National Precision Oncology Program (NPOP, 2013) for collecting tumor tissue sequencing data for tailoring oncology treatment. With these critical elements in place, VHA has been able to position itself as a pioneer for using data technology to meet the operational challenges of a large healthcare enterprise, as well as to support administrative and research needs. However, establishing successful VA collaborations with outside entities remains challenging, mainly because of the need to maintain veterans' health data security and privacy. The following sections focus on efforts to share data with external trusted partners and third-party users that are in accordance with all VA regulatory requirements. Different data-sharing configurations will also be compared to show that the way data are shared greatly affects the classes of clinical questions that can be answered.
Recent data-sharing efforts
Although VHA has carried out many clinical and research data-sharing projects, most involve sharing explicitly defined static data sets specific to the project needs and with particular collaborators. In contrast, recent data-sharing efforts undertaken by VA Boston Healthcare System in the Research for Precision Oncology Program (RePOP) project aim to establish workflows and processes to enable the sharing of longitudinal VA data over more extended periods of time. RePOP, the research component of NPOP, was funded to consent NPOP patients to share their clinical, imaging, and genomic data with researchers to further advancements in cancer care. The overall workflow consists of 4 main technical components: data aggregation, deidentification, formatting, and upload. Aggregation identified relevant cohorts, determined data elements, and linked different modalities by patient identifier. Deidentification was performed using standard methods for external patient identifier generation, date obfuscation (TCGA), and DICOM header stripping for imaging [The Cancer Imaging Archive (TCIA)]. The data were then formatted and bundled according to the receiving data repository requirements and then uploaded into respective external data repositories. The initial use case for this framework was the Applied Proteogenomics OrganizationaL Learning and Outcomes (APOLLO) network , where clinical data were shared with the GDC , and imaging was shared with TCIA . Subsequent data transfers were made to The University of Chicago Center for Translational Data Science. Regulatory requirements were also associated with several of the technical tasks. For aggregation, formal data requests were executed to pull data from the Corporate Data Warehouse (CDW). Internal data use agreements (DUA) were required with various imaging centers to acquire imaging data. The crosswalk between patient Medical Record Numbers and external identifiers was initially not approved to exist but eventually was allowed to remain solely on a secure VA server. This approval was crucial for longitudinal data because it enabled future data sets to link with previously shared data properly. The deidentification processes were reviewed and approved by VA Information Security Officers (ISO) and Privacy Officers (PO). For data upload, submission portals for external data repositories were reviewed and approved by ISOs and POs. DUAs between the VA and data portal administrative entities were executed according to standard VA policy. Using this framework, the clinical, imaging, and genomic data of three related cohorts of cancer patients have been shared outside the VA: (i) consented patients to the GDC and TCIA; (ii) consented patients to The University of Chicago, and (iii) deceased patients to The University of Chicago. The first cohort is a component of the APOLLO network and consists of patients who have given consent for their data to be shared with outside entities and conforms to the data models of GDC and TCIA. The second cohort is like the first, except that the original CDW data model is preserved as much as possible. The third cohort is much larger than the first two and consists of unconsented deceased patients, where the data are mainly clinical with some imaging and genomic data. The first two cohorts have been approved to be downloaded by third-party users, but the third cohort is required to always remain in The University of Chicago environment and can be accessed only by trusted partners of the VHA.
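A minimal sketch of the DICOM header-stripping step in this workflow is shown below, using the pydicom library. The tag list is illustrative and far shorter than what a production deidentification profile (e.g., the DICOM PS3.15 confidentiality profile) would cover, and the file names and external identifier are hypothetical.

# Minimal sketch of DICOM header stripping with pydicom. The tag list is illustrative;
# production pipelines follow a full confidentiality profile rather than this short list.
import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "OtherPatientIDs", "InstitutionName", "ReferringPhysicianName",
]

def strip_headers(in_path, out_path, external_id):
    ds = pydicom.dcmread(in_path)
    for keyword in IDENTIFYING_TAGS:
        if keyword in ds:
            ds.data_element(keyword).value = ""
    ds.PatientID = external_id      # replace the MRN with the external study identifier
    ds.remove_private_tags()        # drop vendor-specific private elements
    ds.save_as(out_path)

strip_headers("ct_slice_0001.dcm", "deid/ct_slice_0001.dcm", "STUDY-000123")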
Lessons learned
The process described in the previous section resulted in several data access configurations. The different configurations illustrate that the way the data are shared greatly affects their ability to address certain classes of clinical questions. As such, it is useful to develop a data-sharing taxonomy based on several key features to determine the advantages and disadvantages of each configuration. Data-sharing utility can be described by factors such as accessibility, data set size, data elements, associated computing environment, presence or absence of protected health information (PHI), access to healthcare system source data, and whether the data are static or longitudinal. In turn, each of these factors affects the data utility. Accessibility, for example, affects what expertise can utilize the data. The size of the data set affects the level of statistical power or the amount of training data generated for ML algorithms. The types of data elements being captured fundamentally determine the queries that can be answered. The computing resources associated with a data repository impact what types of analysis can be performed (e.g., deep learning has certain minimal computing requirements). Data sets that include PHI have higher levels of fidelity to clinical care but cannot be easily shared with collaborators. Direct access to the healthcare system can support studies that require collecting patient-reported outcomes, and longitudinal, periodically refreshed data sets can support prospective studies. Finally, conclusions made using data that do not include diverse racial and ethnic populations may not be applicable to the non-included groups, and care must be taken in making such assertions.
Every stakeholder who touches patient data shares responsibility for delivering on the vision of harnessing the totality of available data to drive decision-making in favor of patients everywhere. The authors hope that by describing some of the emerging resources, we raise awareness and inspire generators, stewards, and consumers of healthcare data to consider the secondary use of such data at the earliest possible step. This will ensure the proper sharing of data to generate insights so that people suffering from cancer and their loved ones stand the best chance to benefit from the collective knowledge of the cancer community. One factor not addressed, but critical, is the representativeness of the data with respect to the intent-to-treat/study population. With the exception of PDS, each of the initiatives highlighted here is an example of real-world data and reflects the respective patient populations. Discrepancies between the observed and anticipated patient demographics can be attributed to numerous factors, such as the care setting (community versus academic referral center). Efforts to ensure that cancer care is as inclusive as possible, combined with more inclusive clinical trial enrollment, will ultimately lead to more representative data sets in the near future.
|
Fetal and Perinatal Nephrology: Small but Mighty
|
55f50c11-d714-4921-8846-9745be32fbff
|
10103224
|
Internal Medicine[mh]
|
M. Liebau reports the following: Employer: University Hospital of Cologne; Consultancy: Representing the University Hospital of Cologne, M. Liebau is a member of an advisory board for Otsuka Pharma; Advisory or Leadership Role: Representing the University Hospital of Cologne, M. Liebau is a member of an advisory board for Otsuka Pharma; and Other Interests or Relationships: Chair of the WG Inherited Kidney Diseases of the European Society for Pediatric Nephrology; CoChair WG CAKUT and Ciliopathies ERKNet; Scientific Advisory Committee PKD Foundation; Scientific Committee German PKD foundation. M. Liebau reports receiving funding from the German Research Council (DFG) for cell biological projects on ARPKD (DFG LI 2397/5-1). The remaining author has nothing to disclose.
This work was supported by research Grant No. 01GM2203B from the Bundesministerium für Bildung und Forschung (M.C. Liebau).
|
Conflict Nephrology: War and Natural Disasters
|
3851b830-8ff3-4976-856a-e76338016cda
|
10103227
|
Internal Medicine[mh]
|
Natural disasters pose challenges to emergency management systems and the health of affected populations. Patients with ESKD are especially vulnerable because they are largely an older population with multiple comorbid conditions, compromised immunity, and dependence on maintenance dialysis or immunosuppressive medications to maintain allograft function. The tenuous health of kidney patients was evident in the aftermath of the perfect storm, Hurricane Katrina, when thousands of patients missed dialysis treatments and suffered poor health outcomes. The lack of preparedness on the part of the government, dialysis providers, and patients led to unnecessary suffering and uncovered the need for greater investment in emergency planning, including expansion of the health care safety net and development of guidelines for disaster planning. In its wake, interconnected kidney community networks, such as the Kidney Community Emergency Response Coalition, arose as a bolster; no subsequent disaster has wreaked as much havoc. The coronavirus disease 2019 (COVID-19) pandemic proved to be a major threat to public health worldwide. COVID-19 has accounted for more than 92 million cases and 1,000,000 deaths in the United States. The approximately 500,000 Americans with ESKD were among the most vulnerable because most congregate to receive in-center hemodialysis treatments thrice weekly. As was the case before Hurricane Katrina, the United States was not adequately prepared. Man-made disasters, such as those brought on by the hostilities in Ukraine, create extreme risk for patients with ESKD whose survival depends on the availability of reliable sources of clean water, electricity, transportation, and trained staff. History also shows that those treated by peritoneal dialysis and those with kidney transplants are not spared during conflicts. Disaster planning remains critical as new COVID strains emerge, hostilities in Ukraine continue, and climate change strengthens storms. In evaluating the current status of emergency planning, stakeholders must recognize that normal is simply the time before the next disaster.
The Disaster That Defined Failure: Hurricane Katrina
In August 2005, Hurricane Katrina struck the Northern Gulf Coast states. Katrina was directly responsible for 1833 fatalities and more than $100 billion in damage. The Gulf Coast's health care system was vulnerable before the storm: In 2003, patients with ESKD were disproportionately represented among the 21% of adults in Louisiana who were uninsured. After the hurricane's landfall, 94 dialysis facilities closed for at least one week, including 54 of 150 in Louisiana. The number of patients with ESKD receiving dialysis fell by 18%. In New Orleans, 44% of patients missed at least one and almost 17% missed three or more dialysis sessions; the adjusted odds ratio for hospitalization among the latter group was 2.16. Overall, kidney patients had poorer health outcomes because of broad lapses in the federal health care safety net, disruptions in dialysis center emergency operations, and lack of personal emergency preparation. State-based Medicaid programs maintained restrictive eligibility rules which shut out patients with ESKD. Unstable housing, lack of an easily accessible patient database, and overwhelmed landline and cellular telephone networks prevented dialysis facilities from tracking patients. Facility evacuation plans contributed to missed dialysis appointments.
Many patients did not have paper copies of medical and personal information, contacts for alternate dialysis units, or a 2-week supply of medications and a nonperishable renal diet.
A Decade and a Half of Progress
The health effects of Hurricane Katrina underscored the need for more robust disaster preparedness. Regulatory changes to Medicare and Medicaid and creation of the Administration for Strategic Preparedness and Response along with increased proactivity in dialysis facilities placed a new emphasis on the whole community approach to disaster management. Five years after Hurricane Katrina, the Affordable Care Act expanded Medicaid eligibility to all Americans earning <133% of the federal poverty level. Had this legislation existed at the time, the overwhelming majority of Hurricane Katrina victims would have been eligible. A decade after its passage into law, 20 million Americans gained coverage. Although the Affordable Care Act has strengthened the health care safety net, its benefits have not been fully realized because most of the Gulf states affected in 2005 have not adopted the optional Medicaid expansion. The Centers for Medicare and Medicaid Services requires dialysis providers to have actionable disaster preparedness plans. Biannual audits assess four components: risk assessment and emergency planning, communication plan, policies and procedures, and training with table-top testing. Providers must demonstrate the capacity to address equipment, power, and water supply failures and ensure the continuity of patient care through coordination with other facilities. Facilities must maintain annual contact with local disaster management agencies to safeguard the needs of patients with ESKD during disasters. Dialysis providers offer patient education before a disaster, which provided enormous benefits after Hurricane Sandy in 2012. Treatments provided before Sandy made landfall significantly decreased ED visits, hospitalizations, and 30-day mortality. These successes would not have happened without improved disaster response systems and input from state health officials who encouraged rapid activation of the Kidney Community Emergency Response Coalition, which was formed in the aftermath of Hurricane Katrina to collaboratively develop, disseminate, implement, and maintain a coordinated preparedness and response framework for the kidney community. It is tasked with raising public awareness of the specific needs of individuals with kidney disease and promoting planning for dialysis services ahead of emergencies. The Kidney Community Emergency Response Coalition supports patients' personal disaster preparedness using educational webinars, pamphlets, and social media.
Charting a Path through the Coronavirus Pandemic
Patients with ESKD suffered disproportionate rates of COVID-19 hospitalization and fatality, especially during the early days of the pandemic. They remain at risk because of new variants, vaccine hesitancy, and questions about the effectiveness of vaccines.
Finding Care during War
Visiting a dialysis center may not be feasible in an active war zone. The bombing of Mariupol during the war in Ukraine killed most patients with ESKD: One survivor reported that 49 of 50 patients in the dialysis center died. The World Health Organization's Surveillance System for Attacks on Health Care documented 186 attacks on health care facilities including dialysis centers, ambulances, and medical warehouses through May 3, 2022.
During the Iraqi occupation of Kuwait, the mortality of patients with ESKD remaining in Kuwait was four times greater than that of those who evacuated. Patients who evacuate need to navigate foreign health systems. The Polish Government announced that Ukrainian refugees would be treated similarly to their own citizens, and care would be covered by the Polish National Health Fund. Unfortunately, not all countries receiving evacuees have been as generous. In Syria, government support is not available in opposition-controlled areas; patients rely on nongovernmental organizations and private donors who lack information about dialysis operations. A 2015 survey of Syrian refugees in Jordan found that 25% of patients did not receive dialysis for at least a week, mostly because of financial constraints, and 46% of patients moved at least three times to access care. During the occupation of Kuwait, patients on automated peritoneal dialysis had 95% mortality, so the remaining patients switched to manual therapies. Governments, providers, and patients must approach disaster planning to promote resilience. Learning from the pitfalls of these events, we put forth these recommendations as broad strokes to improve health outcomes for patients with ESKD and to address barriers to patient care and well-being, especially in underserved communities.
Federal, State, and Local Governments
1. Minimize the financial burden on patients
Although social safety nets have expanded since Hurricane Katrina, during the COVID-19 pandemic, millions of people suffered job loss accompanied by loss of medical coverage. The same segments of the population most affected are most likely to be exposed to and die from COVID-19. These underserved communities have disproportionate rates of chronic health conditions including ESKD. These compounding issues make life especially difficult for patients with ESKD, who might face up to $10,000 in health care costs if hospitalized with COVID-19. To reduce the effect of the pandemic, Congress passed the Families First Coronavirus Response Act, which prevents insurers from charging copayments or applying deductibles to coronavirus tests, and the CARES Act to pay for out-of-network coronavirus tests. However, nationwide, 2.4% of coronavirus tests billed to insurers leave patients responsible for significant costs.
2. Invest in broadband access for telehealth expansion
Although the pandemic has caused immense disruption and suffering, the crisis also provides opportunities to modernize health care delivery. During the public health emergency, regulatory waivers allowed providers to deliver and bill for services across state lines and use a variety of videoconferencing platforms to conduct virtual appointments with patients. Telehealth for patients with ESKD grants greater home-based care, less travel time, fewer trips to the clinic, increased home dialysis education, and greater patient autonomy and self-care. Telemedicine is likely to become a norm for health care delivery. However, lack of broadband access is a significant barrier for rural, underserved, and older populations. Only 43% of those aged 65 years or older, 70% of urban Americans, and 62% of rural residents have broadband access at home. Racial disparities, levels of education, and income correlate with lack of access for telehealth visits. Necessary digital equipment must also be readily available, and solutions, such as nonprofit partnerships, to redistribute refurbished devices should be pursued.
Dialysis Providers
1. Re-evaluate operational protocols
The foremost challenge of the COVID-19 pandemic is ensuring that infected patients and staff do not expose others during transportation to, and treatment at, dialysis facilities. Facilities had to adapt: changing patients' scheduled treatments, enhancing cleaning of treatment stations, minimizing the time patients spend in waiting areas, maximizing the distance between them, and changing placement of tissues and waste receptacles. Scheduling and spacing of staff breaks were also critical.
2. Ensure self-sufficiency for vaccine distribution
Because of their multiple comorbid conditions and inability to reduce exposure by staying at home, patients with ESKD were encouraged by their dialysis providers to receive early vaccination against COVID-19. Working collaboratively with the Centers for Disease Control and Prevention, the White House, and the American Society of Nephrology, providers distributed vaccines to all patients quickly, and uptake was high among this vulnerable population.
Patients
1. Improve digital literacy
Telemedicine appointments are more convenient and encourage greater patient autonomy, but adjusting to the technology can be a significant barrier to wide adoption. Free and low-cost educational resources on technology basics are available. Patients with ESKD can engage with kidney community social media pages and attend educational workshops online. Using this crisis as an opportunity to become tech savvy is an investment in one's health and well-being that will pay off as health care becomes increasingly digital.
2. Invest in mental health
Disasters wreak havoc on mental health and can make coping with everyday necessities difficult. The psychological toll is disturbingly evident years after Hurricane Katrina, with rates of anxiety, depression, post-traumatic stress disorder, addiction, domestic violence, and murder all significantly higher than the years before the storm. The COVID-19 pandemic had unprecedented geographic scope and human toll. No one is immune from the disruption of normalcy, sense of danger, social isolation, financial instability, trauma of hospitalization, and loss of millions of lives. Patients with ESKD and their caregivers must fortify coping strategies to maintain the behaviors that prevent exposure to the virus and bolster their overall health. One positive element of this crisis is that discussing mental health has become more mainstream and less stigmatized. Many free resources and tools are available to add to a healthy lifestyle. App-based guided meditations, communal streaming platforms, and online therapy can be incorporated into an everyday mental wellness routine. Engaging with kidney community social networks can help alleviate the particular stress and anxieties of patients with ESKD.
Disasters like Hurricane Katrina, the coronavirus pandemic, and human conflict shock our nation's health care system and lead to dire consequences. However, they offer a chance to rebuild a more efficient, equitable, and resilient system. Preparation and communication are crucial to ensuring that patients have access to essential health care. Although major changes after Hurricane Katrina led to great improvements, preparation remained inadequate at all levels for the COVID-19 pandemic. Drawing on the lessons learned from those on the frontlines battling these disasters, our recommendations can mitigate complications and reduce mortality in highly vulnerable patients.
|
Development of an assessment tool to measure communication skills among family medicine residents in the context of electronic medical record use
|
d7d61546-3016-49ec-90bc-62c27556ad65
|
10103454
|
Family Medicine[mh]
|
Electronic Medical Records (EMRs) have been widely implemented due to their proven ability to enhance care efficiency by increasing the time physicians spend with their patients, limiting prescription errors, and promoting shared decision making. On the other hand, the use of the EMR has introduced a new set of EMR-specific communication skills because of its impact on eye contact, trust building, and the overall relationship between doctor and patient, and because of its effect on the overall room layout, which may become an obstacle to proper communication. This highlights the need to train physicians on how to balance use of the computer with communication with the patient. Many have proposed new skills, models, or curricula to integrate patient-centered communication within the medical visit in the era of the EMR, in the form of workshops, practice role-plays, and brief didactics. Nevertheless, there is a need for assessment tools that measure physicians' communication skills when using the EMR. Many methods have been used to evaluate residents' communication skills in general, including (1) direct observation (Mini Clinical Evaluation Exercise (mini-CEX) and video review); (2) standardized patients (Objective Structured Clinical Examinations, OSCEs); (3) patient surveys; (4) self-assessment; and (5) peer evaluation (360-degree evaluations). For direct observation, various validated checklists are employed to measure the communication skills of residents, namely the "Kalamazoo Essential Elements: the communication checklist", the "MAAS – Global Rating List for Consultation skills of Doctors", and the SEGUE Framework. However, none of these checklists address EMR-specific communication skills. A systematic review of existing assessment tools for the evaluation of communication skills among physicians found eight tools (out of 45 assessment tools) that were used more frequently, but none of them tackled EMR-specific communication skills. Thus, there is a need to develop a validated tool that assesses EMR-specific communication skills among residents. Only a few available articles have tackled EMR-specific communication skills. Morrow et al. were the first to assess EMR communication skills, using two checklists in pre-developed scenarios among first-year medical students after a brief educational intervention: one for basic communication skills and the other for EMR-related communication skills. However, this study was conducted in 2007, when EMRs were just being introduced into the clinic, and it used a small sample size. By adopting the same checklist, Hassid et al. compared physicians' scores on the SEGUE and on Morrow et al.'s EMR-specific communication skills checklist during videotaped simulated medical encounters. There was a difference in the scores between the two tools, with consistently lower scores on EMR-specific communication skills. Similarly, Biagoili et al. assessed the EMR-specific communication skills of students using the EMR in a simulated environment, extending Morrow et al.'s work to include EMR-related data management skills while interacting with patients. However, all three articles adopted checklists that were not formally validated. In 2017, the first validated tool was developed by Alkureishi et al. as an Electronic-Clinical Evaluation Exercise tool (e-CEX) to test the EMR communication skills of second-year medical students in the context of an OSCE. The checklist included 10 items related to EMR-specific communication skills only.
Therefore, this study aims to develop a single checklist that includes both basic and EMR-related communication skills and to validate its psychometric properties in the context of direct observation of real patients in a family medicine residency program.
An extensive literature review was conducted, focusing on the use of computers and electronic medical records in the clinical setting, as well as their impact on communication skills. The findings of the literature review provided a good understanding of the doctor-patient-computer triad, which is influenced by both the physician's clinical and interpersonal skills. To develop the measurable items for clinical skills, the SEGUE framework (setting up the stage, eliciting information, providing information, understanding patient perspective, concluding the interview) was selected. This framework has been noted to have a high level of acceptability, the ability to be used reliably, evidence of validity, and applicability to a variety of contexts. A set of 28 carefully selected items was developed and distributed across the framework's various branches. The items were chosen based on studies that found positive or negative correlations with specific behaviors during the medical interview incorporating the computer. Items include interacting with the patient rather than the computer at the beginning of the interview, avoiding the use of the computer when addressing a psychological burden, alternating gaze between screen and patient, and spatial rearrangement of the room for easy access to all members of the triad. These items, among others, were used to create the final assessment tool. A set of skills representing relational and process-oriented items was included for the physician's interpersonal skills. Examples include the physician's ability to maintain an empathic approach while not being distracted by the computer's presence, physician comfort during the interview process despite the presence of the computer, and, most importantly, the ability to maintain a patient-centered interview while incorporating the computer.
A scaled grading approach was used to account for both the presence and the quality of the measured skill or item. The scale for the clinical skills used a Likert format, which is commonly used in medical education assessment and allows for sensitivity in measurement and differentiation in the quality of performance on each specific task. The Likert scale ranged from not done (0), through poorly done and adequately done, to well done (6), with an option of "not applicable" included. The same approach was used in grading interpersonal skills, but the emphasis was on the physician maintaining the measurable attributes throughout the interview. The rater was asked to rate the resident's overall interpersonal skills during the encounter on a Likert scale ranging from absent (0), through not consistently applied and consistently applied, to exceptionally applied (6). The total score was calculated as the sum of the scores on the applicable items divided by the number of applicable items, multiplied by 100. The same method of scoring was applied to the six subcategories of the checklist (setting the stage, eliciting information, giving information, understanding patient perspective, ending the encounter, and interpersonal skills).
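To make the scoring rule concrete, the sketch below implements it in Python. Because the reported totals fall on a 0-100 scale, the sketch assumes the total is expressed as a percentage of the maximum attainable score on the applicable items (each rated 0-6); the item names and ratings are hypothetical and are not taken from the study checklist.

```python
# Minimal sketch of the checklist scoring rule described above, assuming the
# total is the percentage of the maximum attainable score on applicable items.
# Item names and ratings below are hypothetical, not the study's actual items.
MAX_RATING = 6  # "well done" anchor on the 0-6 Likert scale

def checklist_score(ratings):
    """Return the 0-100 score over applicable items; None means 'not applicable'."""
    applicable = [r for r in ratings.values() if r is not None]
    if not applicable:
        return None
    return 100 * sum(applicable) / (MAX_RATING * len(applicable))

encounter = {
    "greets_patient_before_turning_to_computer": 5,
    "introduces_the_computer": 0,
    "alternates_gaze_between_screen_and_patient": 4,
    "shares_screen_when_appropriate": None,  # not applicable in this visit
}

print(round(checklist_score(encounter), 1))  # 50.0 for these example ratings
```

The same function can be reused for each subcategory by passing only that subcategory's items.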
A list of proposed items was developed. The communication skills working group (CSWG) at the family medicine department at the American University of Beirut includes four family physicians. The CSWG was considered the expert panel, and every member was asked to rate each proposed item individually on a 5-point Likert scale ranging from "totally agree" to "totally disagree." They provided comments on proposed items and sentence structure and were free to suggest new items. An in-person meeting followed, where the collective results of their individual ratings were discussed. Each item, especially the ones that most of the group disagreed upon, was discussed with respect to the importance and clarity of the statement. A modified set of items was developed and sent again for the CSWG members to rate on their own. This process continued until we reached a final set of items that at least three members agreed upon. Three rounds were performed, and the final set of items can be found in Appendix 1.
All the family medicine department residents' clinics are equipped with a ceiling-mounted camera that captures the part of the room where history taking occurs. The examination table area is not captured. There is a sign in all the residents' rooms stating the presence of camera surveillance. The current practice in the clinic is that the preceptor can monitor any resident through the video monitor. The clinic's policy mandates that the patient sign a written consent only if the encounter is videotaped; obtaining this consent is the nurse's responsibility. Each resident has prescheduled clinic sessions every month. While the resident is attending to actual patients during a session, a faculty member sits in the preceptor room where the video monitor is present. During the research period, the assessment nurse approached all the patients visiting the second- and third-year residents who had agreed to participate in the research. The nurses requested permission from the patients to videotape the interview. If they agreed, the nurse obtained their signatures on the necessary forms, including the appropriate forms per the clinic policy and the research-related informed consent. The nurse then handed the patients a questionnaire, the Communication Assessment Tool (CAT), which they were to fill out privately in the waiting area after their visit with the resident and return in a sealed envelope. The relevant resident-patient encounter was retrieved from the surveillance system and saved in a password-protected folder. The same code was assigned to both the recorded video and the CAT. The family medicine residency program is a three-year training program accredited by the Accreditation Council for Graduate Medical Education-International (ACGME-I). Residents who plan to sit for the Arab board can have a four-year program. Training occurs in the main family medicine practice center at the American University of Beirut along with other satellite clinics. First-year residents were excluded because they have infrequent clinic sessions and are still learning how to use the electronic system. Fourth-year residents were also excluded because they spend most of their time in clinics located outside the main center.
Eight raters were assigned to rate the residents' recorded encounters based on the developed scale. Every rater evaluated the same video encounter twice, three weeks apart. We aimed to have a diversified group of raters. The group of raters included the members of the CSWG, a faculty member who is the physician lead for assessments at the medical school and associate program director for the Internal Medicine residency program, two recently graduated medical doctors (to give their perspective as students), and the senior graduate medical education (GME) program coordinator at the department of family medicine (to give her perspective as a patient with some experience in medical education). The raters completed an evaluation form about the ease of administering the assessment tool, including its user-friendliness and length. The above procedure allows for measurement of test-retest reliability, as the same rater evaluated the same video encounter on two occasions separated by three weeks. Inter-rater reliability was measured by comparing the ratings of different preceptors of the same video on individual items and the overall score. The criterion validity of the checklist was measured by comparing the residents' scores on the developed checklist to the patients' CAT scores. A variety of medical cases with varying chief complaints supported generalizability.
Descriptive analysis was performed to describe the number of residents, the clinical encounters, the scores on each item and the total score, and the satisfaction of the raters. Cronbach's alpha was used to measure scale reliability. The intraclass correlation coefficient and the Pearson correlation were used to measure inter-rater reliability and test-retest reliability, respectively. Each encounter was rated by either two or four raters, depending on the availability of the raters. For inter-rater reliability, all permutations of paired raters were used to calculate the intraclass correlation coefficient, and a one-way random-effects model was used. Spearman correlation was used to compare the developed scale score and the CAT score, as the CAT score was not normally distributed. The p-value was set at 0.05 for statistical significance. SPSS version 27 was used for statistical analysis.
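As an illustration only, the following Python sketch shows how the reliability and validity statistics named above could be computed outside SPSS: Cronbach's alpha from an encounter-by-item score matrix, a one-way random-effects ICC(1,1) for inter-rater reliability, Pearson correlation for test-retest reliability, and Spearman correlation against the CAT score. All data are simulated; none of the numbers reproduce the study's results.

```python
# Hypothetical illustration of the statistics described above; simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Encounter-by-item matrix of 0-6 ratings (21 encounters x 28 checklist items),
# built from a latent "encounter quality" so that items are positively correlated.
quality = rng.normal(4.0, 1.0, size=(21, 1))
items = np.clip(np.rint(quality + rng.normal(0, 1, size=(21, 28))), 0, 6)

def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def icc_oneway(x):
    """ICC(1,1) from a one-way random-effects model; rows = encounters, cols = raters."""
    n, k = x.shape
    ms_between = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print("Cronbach alpha:", round(cronbach_alpha(items), 3))

# Total scores given by two raters to the same encounters (inter-rater reliability).
rater_a = rng.normal(65, 7, size=21)
rater_b = rater_a + rng.normal(0, 5, size=21)
print("ICC(1,1):", round(icc_oneway(np.column_stack([rater_a, rater_b])), 3))

# Test-retest (same rater, three weeks apart) and criterion validity vs. CAT score.
retest = rater_a + rng.normal(0, 3, size=21)
cat = rng.normal(48, 9, size=21)
print("Test-retest r:", round(stats.pearsonr(rater_a, retest)[0], 3))
print("Spearman rho vs CAT:", round(stats.spearmanr(rater_a, cat)[0], 3))
```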
A total of 8 residents agreed to participate in the research. The study extended over one academic year. Twenty-one clinical encounters were recorded. The average length of the encounters was 15.6 ± 6.3 min. Most of the encounters were for acute complaints: foreign body in the eye, musculoskeletal complaints, chest pain, fever, diarrhea, urinary symptoms, and upper respiratory tract infections, with very few involving general complaints such as checkups, laboratory test requests, and well-baby visits. The age of the patients varied between 5 and 62 years, with 61.9% being female patients. The average total score was 65.2 ± 6.9 and 48.1 ± 9.5 for the developed scale and the CAT scale, respectively. The scoring of each item is shown in Appendix 2. The correlation between the CAT score and the developed checklist score was 0.215, p-value 0.461. The scale reliability was good, with a Cronbach alpha of 0.694. The test-retest reliability was 0.873, p < 0.0001. Some encounters were rated by more than 2 raters. The final analysis was based on a total of 52 pairs. For the total score on the developed checklist, the intraclass correlation coefficient (ICC) between raters was 0.429 [0.030, 0.665], p-value 0.019 (Table). The levels of agreement between any two raters for the individual items of the assessment criteria ranged from kappa = 0.359 (item 3) to kappa = 0.693 (item 4) (data not shown). The level of agreement between any two raters on the cumulative score for setting the stage was not significant. The level of agreement between any two raters on the cumulative scores of the other 5 categories ranged from 0.506 (interpersonal skills) to 0.969 (ending the encounter). The level of agreement between any two raters on all the items was highest for the family medicine/graduate medical student pair, followed by the pair of two family physicians (Table). Regarding the use of the assessment tool, all 7 raters totally agreed/agreed that the length of the assessment tool was adequate. One rater disagreed with the statement that it was easy to observe and evaluate the behavior. Two raters considered some of the sentences to be unclear or not easy to understand.
Appropriate use of the EMR while still maintaining a meaningful and engaging interaction with patients is an important skill. The literature is scarce regarding validated assessment tools that measure EMR-related communication skills. This study aimed to develop and validate a single checklist that tackles both general communication skills and EMR-related communication skills of family medicine residents. The scale reliability was good, with a Cronbach alpha of 0.694. The test-retest reliability was 0.873, p < 0.0001. The level of agreement between any two raters on the total checklist score was 0.429 [0.030, 0.665]. Although the interrater reliability was poor to moderate for the total scale score, it was moderate for eliciting information, giving information, understanding patient perspective, and interpersonal skills, and excellent for ending the encounter. Setting the stage had the lowest interrater reliability, at 0.047. Two items related to setting the stage were scored low by the raters, mainly introducing the computer and reassuring the patient regarding the confidentiality of the EMR. With the expanded use of computers in daily activities, it is possible that physicians do not feel the need to introduce the computer. Patients consider the use of the EMR in the clinic a normal process and part of the physician's work. Moreover, physicians may consider that confidentiality of data is standard of care and does not need to be explained to the patient in every single encounter, except in specific cases where sensitive information is going to be discussed. The literature is scarce regarding checklists that measure EMR-related communication skills against which to compare the validity and reliability of the tool. The most relevant validated tool is the e-CEX developed by Alkureishi et al. among medical students. In the e-CEX validation, the authors studied discriminant validity between the e-CEX and standardized patients' scores and did not measure interrater reliability. In this study, we compared the checklist scores to the CAT score, which is a reliable and valid instrument for measuring patients' perception of physician communication skills in the context of the EMR. Nevertheless, there was a poor correlation between the CAT and checklist scores. One explanation could be that patients tend to rate their physicians positively, or that patients pay attention to different communication skills than academics do. Another explanation is that the CAT measures basic communication skills; physicians who scored well on basic communication skills had lower scores on EMR-related skills. Further research should be conducted to measure criterion validity by comparing the checklist to other faculty-based assessment measurements. Regarding interrater reliability, the intraclass correlation coefficient of mini-CEX clinical skills assessment among medical trainees ranged from 0.66 to 0.81 in different clinical scenarios. A systematic review of 45 existing assessment tools for evaluating basic communication skills has shown poor to moderate psychometric properties. Measuring communication skills is a challenging task given that it has a subjective component and may differ across clinical settings, such as medical students, specialty trainees, or practicing physicians. Our study had several strengths, such as using a variety of rater backgrounds. Most of the literature on basic communication skills tools involves standardized patients in simulated environments where the learners are aware of their behaviors.
This study used videos of real patient encounters in a primary care setting. Moreover, this checklist combines both EMR-related and general communication skills. As for the study's limitations, the residents involved in the study did not receive formal training in EMR-related communication skills. Moreover, the small number of residents who agreed to participate could lead to selection bias. Another limitation concerns generalizability to other disciplines, especially since the interrater reliability between the family medicine/internal medicine pair was low and the study was based on a single institution.
This tool is a valid starting point, taking into consideration the lack of rigorous current checklists that measure EMR-related communication skills. This study has demonstrated the validity and reliability of the tool; however, further research and optimization of the form are needed. It is worth re-structuring the form into three sections: basic skills, EMR-related skills, and interpersonal skills. As EMRs become more established and standard of care in the future, some items may become obsolete, requiring a modified, shorter form. A larger sample with diverse types of residents may be warranted to increase the generalizability of the tool. To improve validity, this tool could be compared with other well-established basic communication skills tools. The scores of this tool could also be compared to the overall communication skills scores captured by the program from other sources.
This checklist is a reliable and valid instrument that combines both basic and EMR-related communication skills. Further research is needed to measure its psychometric properties in practice.
Electronic supplementary material: Supplementary Material 1; Supplementary Material 2.
|
European Society of Cardiology quality indicators for the management of patients with ventricular arrhythmias and the prevention of sudden cardiac death
|
a46fb590-6a0b-476e-ba68-c44364def5e1
|
10103575
|
Internal Medicine[mh]
|
Sudden cardiac death (SCD) remains a major healthcare challenge, accounting for 10–15% of all deaths in Europe. Moreover, evidence suggests variation in the implementation of SCD preventive measures within and between countries. This variation calls for the development of new initiatives which may help identify areas for quality improvement in the management of patients with ventricular arrhythmias (VA) and for the prevention of premature deaths. Quality indicators (QIs) are tools that may be used to measure adherence to, and the outcomes from, the uptake of guideline-recommended therapies. Given that QIs relate to discrete aspects of care, the use of QIs allows more informed interpretation of ‘real-world’ data to help address the ‘second translational gap’. As such, the European Society of Cardiology (ESC) has established suites of QIs for people with and at risk of cardiovascular disease, but until now has not developed QIs for the management of VA and the prevention of SCD. Although performance and quality measures exist for SCD, they predate the current clinical practice guidelines. In parallel to the writing of the 2022 ESC Clinical Practice Guidelines for the management of patients with VA and the prevention of SCD, and in collaboration with the European Heart Rhythm Association (EHRA) of the ESC, the QI Working Group for VA and SCD prevention was established to develop the first set of QIs by the ESC for this group of patients. By producing a suite of QIs which align with the current recommendations for the management of patients with VA and the prevention of SCD, it is anticipated that standardized evaluation of guideline adherence will be facilitated, and priority areas identified for quality improvement initiatives.
We used the ESC methodology for the development of QIs for the quantification of cardiovascular care and outcomes. This methodology comprises: (i) the identification of key domains of care for the management of VA and the prevention of SCD by constructing a conceptual framework of care, (ii) the development of candidate QIs by conducting a systematic review of the literature, (iii) the selection of the final set of QIs using a modified-Delphi method, and (iv) the evaluation of the feasibility of the developed QIs. The developed QIs were classified as structural, process or outcome indicators. Structural QIs assess quality of care at the organizational level, process QIs evaluate quality of care at the level of the patient, and outcome QIs capture the outcomes of care delivery. The ESC QIs are categorized as main and secondary indicators with main QIs scoring higher for validity and feasibility.
The international Working Group was formed in April 2021 and comprised healthcare professionals with expertise in the management of patients with VA and the prevention of SCD, Task Force members of the respective ESC Clinical Practice Guidelines, members of EHRA, members of the ESC QI Committee and a patient representative.
Following the formation of the Working Group, the members defined the target population for whom the QIs are applicable as SCD victims, survivors of sudden cardiac arrest (SCA), and patients with VA or other conditions that are associated with SCD (e.g. primary electrical diseases, inherited disorders, and heart failure with reduced ejection fraction). The Working Group also identified the key domains of care for the target population by conceptually illustrating the patient journey during the care delivery process (Figure). For the process QIs, the Working Group defined the patients who are eligible for the measured care process (denominator), the accomplishment criteria for the QI (numerator), and the time point at which the assessment is performed (measurement period). For the structural QIs, only numerator definitions were provided, given that these are binary measurements (yes/no) which capture information about the availability of resources and infrastructure.
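For illustration, the sketch below shows how a process QI defined in this way (eligible patients as the denominator, the accomplishment criterion as the numerator, assessed within a measurement period) could be computed from registry-style records. The record fields, dates, and values are hypothetical and do not correspond to any specific ESC QI definition.

```python
# Hypothetical sketch of computing a process QI as numerator/denominator over a
# measurement period. Field names and example records are illustrative only;
# they are not the ESC data definitions.
from dataclasses import dataclass
from datetime import date

@dataclass
class PatientRecord:
    eligible: bool          # meets the denominator definition
    criterion_met: bool     # meets the numerator (accomplishment) criterion
    assessed_on: date       # time point at which the assessment is performed

def process_qi(records, period_start, period_end):
    denominator = [r for r in records
                   if r.eligible and period_start <= r.assessed_on <= period_end]
    if not denominator:
        return None
    numerator = [r for r in denominator if r.criterion_met]
    return len(numerator) / len(denominator)

cohort = [
    PatientRecord(True, True, date(2022, 3, 1)),
    PatientRecord(True, False, date(2022, 6, 15)),
    PatientRecord(False, False, date(2022, 7, 1)),  # not in the denominator
]
print(process_qi(cohort, date(2022, 1, 1), date(2022, 12, 31)))  # 0.5
```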
A systematic review of the published literature was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses statement. We searched two online bibliographic databases, MEDLINE and Embase via OVID®. The initial search strategy was developed in MEDLINE using keywords with a variety of medical subject headings (MeSH) terms. We included randomized controlled and observational studies, including publications from clinical registries. We included the main publications of the major trials and registries from which our search obtained only sub-studies, and reviewed the studies included in the retrieved systematic reviews and meta-analyses against our inclusion criteria. The search was restricted to the English language and publication dates between 01 January 2015 and 15 June 2021, given that the year 2015 corresponds to the publication of the ESC Clinical Practice Guidelines for VA and SCD.
We included articles fulfilling the following criteria: (i) the study population was adults (age ≥18 years) with a prior history, family history or an established risk for SCA, (ii) the study defined an intervention (structural or process aspect of care) for which at least one outcome measure was evaluated, (iii) the outcome measures were hard endpoints (e.g. mortality, re-admission) or patient reported outcomes (e.g. quality of life), (iv) the study provided definitions for the intervention and outcome measure(s) evaluated, and (v) the study was a peer-reviewed randomized controlled trial or observational study.
EndNote X9 was used for reference management and for duplicate removal. Three reviewers (S.A., T.R., and S.T.) independently examined the abstracts of the studies retrieved from the search against the inclusion criteria. Disagreements were resolved through discussion and a full text review of the debated article.
Studies that met the eligibility criteria were included in the initial phase of the review. A broad inclusion was used to ensure that the list of initial (candidate) QIs encompassed the range of care delivery. The full texts of the included articles were reviewed by three authors (S.A., T.R., and S.T.) and for each study both the intervention(s), and the outcome measure(s) evaluated were extracted to an Excel spreadsheet. Definitions of the extracted data items were obtained when provided in the study.
Existing QIs, consensus documents, and Clinical Practice Guidelines pertinent to the management of VA and the prevention of SCD were reviewed. The Working Group opted not to replicate aspects of care described in previous ESC QI suites. As such, the present document is complementary to published ESC QI documents. The goal of the Clinical Practice Guidelines review was to assess the suitability of their recommendations with the strongest association with benefit and harm (Class I and III, respectively) against the ESC criteria for QIs.
We used the modified Delphi method to evaluate the candidate QIs that were derived from the literature. The ESC criteria for QI development were shared with the Working Group members prior to the voting in order to guide the selection process. Candidate QIs were graded according to a nine-point ordinal scale for both validity and feasibility by each Working Group member using an online questionnaire. Two rounds in total were conducted, with a number of teleconferences after each round to discuss the results of the vote and address any concerns or ambiguities.
Ratings of 1 to 3 were defined as meaning that the QI was not valid/feasible; ratings 4 to 6 that the QI was of an uncertain validity/feasibility; and ratings of 7 to 9 that the QI was valid/feasible. For each candidate QI, the median and the mean deviation from the median were calculated to evaluate the central tendency and the dispersion of the votes. Indicators with median scores ≥7 for validity, ≥4 for feasibility, and with minimal dispersion were included in the final set of QIs. Those QIs meeting the inclusion criteria in the first voting round formed the main QIs and those that met the inclusion criteria after a second round of voting formed the secondary QIs.
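A minimal sketch of this vote-summary rule follows, assuming the mean absolute deviation from the median is the dispersion measure and using an arbitrary cut-off for "minimal dispersion" (the text does not specify one); the votes below are invented for illustration.

```python
# Illustrative sketch (made-up votes) of the Delphi vote summary described above:
# per-candidate median, mean absolute deviation from the median, and the
# inclusion thresholds (validity median >= 7, feasibility median >= 4).
# The dispersion cut-off below is a placeholder; the text does not specify one.
import statistics

def summarise(votes):
    med = statistics.median(votes)
    mad = sum(abs(v - med) for v in votes) / len(votes)
    return med, mad

def include(validity_votes, feasibility_votes, max_dispersion=1.5):
    v_med, v_mad = summarise(validity_votes)
    f_med, f_mad = summarise(feasibility_votes)
    return (v_med >= 7 and f_med >= 4
            and v_mad <= max_dispersion and f_mad <= max_dispersion)

validity = [8, 7, 9, 7, 8, 6, 9, 8]      # 9-point ordinal ratings from 8 voters
feasibility = [6, 5, 7, 4, 6, 5, 7, 6]
print(include(validity, feasibility))     # True under these example votes
```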
Domains of care

In total, eight domains of care for the management of patients with VA and the prevention of SCD were identified by the Working Group. These domains included: (i) structural framework, (ii) screening and diagnosis, (iii) risk stratification, (iv) patient education and lifestyle modification, (v) pharmacological treatment, (vi) device therapy, (vii) catheter ablation, and (viii) outcomes (Figure).

Quality indicators: systematic review results

The literature search retrieved 3,369 articles, of which 107 met the inclusion criteria (Figure) and were used to extract 75 candidate QIs for the first Delphi round. Of those, 25 (33%) met the criteria for inclusion as main QIs, 39 (52%) were excluded and 11 (15%) QIs were deemed inconclusive. Following Working Group membership discussion, 8 (32%) of the main QIs were downgraded and subsequently reconsidered in a second Delphi round alongside the inconclusive ones. Thus, a total of 19 QIs were included in the second Delphi round, after which 4 (21%) QIs met the inclusion criteria and were selected as secondary QIs (Figure). As such, a total of 17 main and 4 secondary QIs were included in the final set of the 2022 ESC QIs for the management of VA and the prevention of SCD (Table).

Domain 1: structural framework

Organizational components in healthcare centres are important for optimizing the management of patients with VA and those at risk for SCD. Such structural measures are relevant to the standards of care at the institutional level which may impact patient outcomes. In this context, the availability of a dedicated and competent cardiac arrest team that delivers prompt and high-quality cardiopulmonary resuscitation according to the European Resuscitation Guidelines is an indicator of care quality for SCD prevention (QI 01M01). The follow-up of patients with cardiac implantable electronic devices (CIED) is an important aspect of care delivery for patients with VA and those at risk of SCD. Remote CIED monitoring has been demonstrated to prevent inappropriate defibrillator shocks and to improve clinical outcomes and is thus a QI of CIED follow-up (QI 01M02).

Domain 2: screening and diagnosis

Identifying the underlying aetiology of cardiac arrhythmias is the primary goal not only for preventing further episodes in aborted SCD victims, but also for guiding familial investigation in case of a documented or suspected inherited cardiac disease. The performance of an autopsy for SCD is necessary for the investigation of potential inherited cardiac diseases, particularly in unexplained SCD in young (age < 50 years) individuals. As such, the performance of a comprehensive autopsy including cardiac histopathology and post-mortem genetic testing (also known as the molecular autopsy) targeted not only at primary electrical diseases but also at concealed cardiomyopathies, with or without toxicology assessment (e.g. polypharmacy or drug abuse), in this group of patients is an indicator of care quality (QI 02M01). Screening the relatives of those with SCD is recommended to identify asymptomatic individuals at potential risk of lethal arrhythmias due to an inherited cardiac disease. Having a standardized protocol for such screening is an indicator of SCD prevention care quality (QI 02M02). In patients with unexplained SCA, pharmacological provocation testing increases the diagnostic yield and is an indicator of care quality (QI 02M03).
Advanced imaging modalities such as late gadolinium enhancement (LGE) on cardiac magnetic resonance (CMR) imaging play a major role in the diagnosis of arrhythmogenic right ventricular cardiomyopathy (ARVC) (QI 02M04).

Domain 3: risk stratification

Risk assessment may identify individuals at higher risk of VA or SCD and helps determine risk-mitigation strategies, such as pharmacological therapy or implantable cardioverter defibrillator (ICD) implantation. For patients with hypertrophic cardiomyopathy (HCM), the HCM-SCD risk score provides an estimate of the 5-year risk of SCD. This algorithm has been internally and externally validated and improves SCD risk prediction when compared with other prediction models. Patients with a predicted 5-year risk of SCD ≥ 6% have the highest event rate and the most favourable risk-benefit ratio for ICD implantation. The use of the HCM-SCD risk score therefore forms a QI for the prevention of SCD in patients with HCM (QI 03M01). In addition, LGE-CMR helps identify the presence of fibrosis in patients with HCM and has prognostic implications. Thus, LGE-CMR at the time of initial evaluation has been selected as an indicator of care quality for this group of patients (QI 03M02).

Domain 4: patient education and lifestyle modifications

Lifestyle habits and physical factors may induce VA in patients with certain types of underlying heart disease. Patient education is recommended to reduce the risk of VA and SCD. Whilst adopting a 'healthy' lifestyle including smoking cessation, regular exercise, healthy diet, and weight loss reduces the risk of SCD, specific lifestyle modifications may be needed for certain underlying arrhythmogenic disorders. ARVC is an inherited disease whose progression and clinical course, including VA occurrence, is adversely affected by high-intensity exercise. Thus, patient counselling on avoidance of vigorous exercise is an essential component of SCD prevention in this group of patients (QI 04M01). For patients with long QT syndrome (LQTS), several triggers have been identified for different types of the disorder. As such, educating patients on the avoidance of those triggers is of paramount importance to reduce the risk of SCD in patients with LQTS. Furthermore, education is essential to reduce modifiable factors, such as QT-prolonging medications (www.crediblemeds.org) and electrolyte abnormalities (QI 04M02). An ICD/cardiac resynchronization therapy-defibrillator (CRT-D) can affect daily life and mental health. Having an ICD also entails sensitive discussions about device deactivation among patients and families. Accordingly, it is recommended that patients with an ICD/CRT-D receive counselling about living with an ICD (QI 04S01).

Domain 5: pharmacological treatment

Adrenergic activation is a well-documented trigger of VA in patients with congenital LQTS. Beta blockers reduce the burden of syncope and SCD in patients with LQTS. The non-selective beta blockers propranolol and nadolol are even more protective against breakthrough arrhythmic events in LQTS patients. Thus, beta blockers constitute the mainstay of the management of patients with congenital LQTS. Whilst certain types of LQTS may derive greater benefit from beta blocker treatment than other types, improved outcomes are observed across the whole spectrum of LQTS, and beta blocker therapy is thus an indicator of care quality in this group of patients (QI 05M01).
Domain 6: device therapy ICD therapy is considered a primary therapeutic option for the prevention of arrhythmic death. Evidence supports the use of ICD for secondary and primary prevention of SCD in eligible patients. For secondary prevention after cardiac arrest or sustained symptomatic ventricular tachycardia (haemodynamically not tolerated), where no reversible cause is identified, ICD implantation reduces all-cause mortality when compared with medical treatment and is thus a QI for SCD prevention ( QI 06M01 ). For the primary prevention of SCD, the strongest evidence is in favour of device therapy in patients with symptomatic heart failure and a left ventricular ejection fraction ≤ 35% despite ≥ 3 months of optimal medical therapy. For those with non-ischaemic heart failure, data supporting the benefit derived from primary prevention ICD implantation is less robust. Therefore, the working group voted in favour of adopting the proportion of ischaemic cardiomyopathy patients, New York Heart Association class II-III who have a left ventricular ejection fraction ≤35% and ≥ 3 months of optimal medical therapy and a life expectancy > 1 year who receive ICD for primary prevention of SCD as a QI of appropriate device therapy ( QI 06M02 ). Customization of optimal ICD settings is associated with a reduced number of ICD therapies and improved patient outcome. , Programming of prolonged tachyarrhythmia detection settings and high-rate tachycardia detection thresholds is effective in reducing the overall therapy burden, without impairing patient safety among primary prevention ICD recipients. Accordingly, detailed programming recommendations are now available in expert consensus papers. , The proportion of primary prevention ICD recipients whose device is programmed to a prolonged detection strategy and/or high-rate programming strategy is proposed as an indicator of high-quality care ( QI 06M03 ). Domain 7: catheter ablation Despite the efficacy of ICD therapy in terminating VT episodes, the burden of ICD interventions should be minimized because ICD shocks are associated with poorer quality of life and outcomes. , Catheter ablation is an effective intervention in reducing VT recurrences in specific types of VTs with subsequent improvement in survival. Treatment alternatives in ICD recipients experiencing VT recurrences despite antiarrhythmic drug treatment would be either escalation of antiarrhythmic drug or catheter ablation. VT ablation is more effective in reducing recurrent VT episodes and appropriate ICD shocks than antiarrhythmic drug escalation in ischaemic cardiomyopathy with VT despite appropriate first-line antiarrhythmic drugs. Therefore, the proportion of ischaemic cardiomyopathy patients with recurrent, symptomatic sustained monomorphic VT despite chronic amiodarone therapy who receive VT ablation is a QI in provision of catheter ablation therapy ( QI 07M01 ). , Domain 8: outcomes Whilst VT ablation reduces ICD shocks and VT recurrence and has favourable effects on patient outcomes, it may be associated with procedural complications including stroke and death. Morbidity and mortality in the 30 days following VT ablation is not negligible. 
Notwithstanding that procedural complications or death within 30 days after VT ablation are not necessarily attributable to the procedure per se, but rather to the underlying heart disease or even non-cardiac causes, it remains important to monitor trends in all-cause mortality ( QI 08M01 ) and procedural complications in the first 30 days following VT ablation ( QI 08S03 ). With regards to ICD procedures, complications in the first 30 days after implantation ( QI 08S01 ) and procedure-related infections up to 1 year after all types of ICD implantation ( QI 08S02 ) are QIs. Survival to hospital discharge after out-of-hospital cardiac arrest is determined by several factors including the organization of emergency medical service, bystander CPR-rates, post-resuscitation protocols and provision of long-term care. Survival to hospital discharge is a key indicator for monitoring changes over time within a given system and for comparison across sites ( QI 08M02 ).
Domains of care
In total, eight domains of care for the management of patients with VA and the prevention of SCD were identified by the Working Group. These domains included: (i) structural framework, (ii) screening and diagnosis, (iii) risk stratification, (iv) patient education and lifestyle modification, (v) pharmacological treatment, (vi) device therapy, (vii) catheter ablation, and (viii) outcomes ( Figure ).
Quality indicators
Systematic review results
The literature search retrieved 3,369 articles, of which 107 met the inclusion criteria ( Figure ) and were used to extract 75 candidate QIs for the first Delphi round. Of those, 25 (33%) met the criteria for inclusion as main QIs, 39 (52%) were excluded and 11 (15%) QIs were deemed inconclusive. Following Working Group membership discussion, 8 (32%) of the main QIs were downgraded and subsequently reconsidered in a second Delphi round alongside the inconclusive ones. Thus, a total of 19 QIs were included in the second Delphi round, after which 4 (21%) QIs met the inclusion criteria and were selected as secondary QIs ( Figure ). As such, a total of 17 main and 4 secondary QIs were included in the final set of the 2022 ESC QIs for the management of VA and the prevention of SCD ( Table ).
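For readers who want to trace the selection arithmetic described above, the short Python sketch below simply recomputes the quoted counts and percentages; it is illustrative only and was not part of the QI development methodology.

```python
# Illustrative tally of the Delphi selection pipeline; figures are taken
# from the text above and the percentages are recomputed for transparency.
candidates = 75                                     # candidate QIs in round 1
round1 = {"main": 25, "excluded": 39, "inconclusive": 11}
assert sum(round1.values()) == candidates

downgraded = 8                                      # main QIs reconsidered after discussion
round2_pool = round1["inconclusive"] + downgraded   # 19 QIs entered round 2
secondary = 4                                       # QIs meeting criteria in round 2
final_main = round1["main"] - downgraded            # 17 main QIs remain

for label, n in round1.items():
    print(f"Round 1 {label}: {n} ({n / candidates:.0%})")
print(f"Round 2 pool: {round2_pool}; selected as secondary: {secondary} ({secondary / round2_pool:.0%})")
print(f"Final set: {final_main} main + {secondary} secondary = {final_main + secondary} QIs")
```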
Domain 1: structural framework
Organizational components in healthcare centres are important for optimizing the management of patients with VA and those at risk for SCD. Such structural measures are relevant to the standards of care at the institutional level which may impact patient outcomes. In this context, the availability of a dedicated and competent cardiac arrest team that delivers prompt, high-quality cardiopulmonary resuscitation according to the European Resuscitation Guidelines is an indicator of care quality for SCD prevention (QI 01M01). The follow-up of patients with cardiac implantable electronic devices (CIED) is an important aspect of care delivery for patients with VA and those at risk of SCD. Remote CIED monitoring has been demonstrated to prevent inappropriate defibrillator shocks and to improve clinical outcomes, and is thus a QI of CIED follow-up (QI 01M02).
Domain 2: screening and diagnosis
Identifying the underlying aetiology for cardiac arrhythmias is the primary goal not only for preventing further episodes in aborted SCD victims, but also for guiding familial investigation in case of a documented or suspected inherited cardiac disease. The performance of an autopsy after SCD is necessary for the investigation of potential inherited cardiac diseases, particularly in unexplained SCD in young (age < 50 years) individuals. As such, the performance of a comprehensive autopsy including cardiac histopathology and post-mortem genetic testing (also known as the molecular autopsy), targeted not only at primary electrical diseases but also at concealed cardiomyopathies, with or without toxicology assessment (e.g. polypharmacy or drug abuse), in this group of patients is an indicator of care quality ( QI 02M01 ). Screening the relatives of those with SCD is recommended to identify asymptomatic individuals at potential risk of lethal arrhythmias due to an inherited cardiac disease. Having a standardized protocol for such screening is an indicator of SCD prevention care quality ( QI 02M02 ). In patients with unexplained sudden cardiac arrest (SCA), pharmacological provocation testing increases the diagnostic yield and is an indicator of care quality ( QI 02M03 ). Advanced imaging modalities such as late gadolinium enhancement (LGE) on cardiac magnetic resonance imaging (cMRI) play a major role in the diagnosis of arrhythmogenic right ventricular cardiomyopathy (ARVC) ( QI 02M04 ).
Domain 3: risk stratification
Risk assessment may identify individuals at higher risk of VA or SCD and helps determine risk-mitigation strategies, such as pharmacological therapy or implantable cardioverter defibrillator (ICD) implantation. For patients with hypertrophic cardiomyopathy (HCM), the HCM-SCD risk score provides an estimate of the 5-year risk of SCD. This algorithm has been internally and externally validated and improves SCD risk prediction when compared with other prediction models. Patients with a predicted 5-year risk of SCD ≥ 6% have the highest event rate and the most favourable risk-benefit ratio for ICD implantation. The use of the HCM-SCD risk score therefore forms a QI for the prevention of SCD in patients with HCM ( QI 03M01 ). In addition, LGE-CMR helps identify the presence of fibrosis in patients with HCM and has prognostic implications. Thus, LGE-CMR at the time of initial evaluation has been selected as an indicator of care quality for this group of patients ( QI 03M02 ).
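In practice, process QIs of this kind are typically reported as a proportion with an explicitly defined numerator and denominator. The following minimal Python sketch, using entirely hypothetical field names and records rather than anything specified in the QI document, illustrates how QI 03M01 could be computed from a registry export.

```python
# Hypothetical sketch of a proportion-based QI (here, documented use of the
# HCM-SCD risk score in patients with HCM). Field names are illustrative only.
from typing import Dict, List

def qi_hcm_risk_score(patients: List[Dict]) -> float:
    """Proportion of HCM patients with a documented HCM-SCD risk score."""
    denominator = [p for p in patients if p["diagnosis"] == "HCM"]
    numerator = [p for p in denominator if p["risk_score_documented"]]
    return len(numerator) / len(denominator) if denominator else float("nan")

registry = [
    {"diagnosis": "HCM", "risk_score_documented": True},
    {"diagnosis": "HCM", "risk_score_documented": False},
    {"diagnosis": "DCM", "risk_score_documented": False},   # not in the denominator
]
print(f"QI 03M01 (illustrative): {qi_hcm_risk_score(registry):.0%}")  # 50%
```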
Domain 4: patient education and lifestyle modifications
Lifestyle habits and physical factors may induce VA in patients with certain types of underlying heart disease. Patient education is recommended to reduce the risk of VA and SCD. Whilst adopting a 'healthy' lifestyle including smoking cessation, regular exercise, a healthy diet, and weight loss reduces the risk of SCD, specific lifestyle modifications may be needed for certain underlying arrhythmogenic disorders. ARVC is an inherited disease whose progression and clinical course, including VA occurrence, is adversely affected by high-intensity exercise. Thus, patient counselling on avoidance of vigorous exercise is an essential component of SCD prevention in this group of patients ( QI 04M01 ). For patients with long QT syndrome (LQTS), several triggers have been identified for different types of the disorder. As such, educating patients on the avoidance of those triggers is of paramount importance to reduce the risk of SCD in patients with LQTS. Furthermore, education is essential to reduce modifiable factors, such as QT-prolonging medications ( www.crediblemeds.org ) and electrolyte abnormalities ( QI 04M02 ). An ICD/cardiac resynchronization therapy-defibrillator (CRT-D) can affect daily life and mental health. Having an ICD also necessitates sensitive discussions about device deactivation among patients and families. Accordingly, it is recommended that patients with an ICD/CRT-D receive counselling about living with an ICD ( QI 04S01 ).
Domain 5: pharmacological treatment
Adrenergic activation is a well-documented trigger of VA in patients with congenital LQTS. Beta blockers reduce the burden of syncope and SCD in patients with LQTS. The non-selective beta blockers propranolol and nadolol are even more protective against breakthrough arrhythmic events in LQTS patients. Thus, beta blockers constitute the mainstay of the management of patients with congenital LQTS. Whilst certain types of LQTS may derive greater benefit from beta blocker treatment than others, improved outcomes are observed across the whole spectrum of LQTS, and beta blocker therapy is thus an indicator of care quality in this group of patients ( QI 05M01 ).
Domain 6: device therapy
ICD therapy is considered a primary therapeutic option for the prevention of arrhythmic death. Evidence supports the use of ICD for secondary and primary prevention of SCD in eligible patients. For secondary prevention after cardiac arrest or sustained symptomatic ventricular tachycardia (haemodynamically not tolerated), where no reversible cause is identified, ICD implantation reduces all-cause mortality when compared with medical treatment and is thus a QI for SCD prevention ( QI 06M01 ). For the primary prevention of SCD, the strongest evidence is in favour of device therapy in patients with symptomatic heart failure and a left ventricular ejection fraction ≤ 35% despite ≥ 3 months of optimal medical therapy. For those with non-ischaemic heart failure, the data supporting the benefit derived from primary prevention ICD implantation are less robust. Therefore, the Working Group voted in favour of adopting, as a QI of appropriate device therapy, the proportion of ischaemic cardiomyopathy patients in New York Heart Association class II-III with a left ventricular ejection fraction ≤ 35% despite ≥ 3 months of optimal medical therapy and a life expectancy > 1 year who receive an ICD for primary prevention of SCD ( QI 06M02 ). Customization of ICD settings is associated with a reduced number of ICD therapies and improved patient outcomes. Programming of prolonged tachyarrhythmia detection settings and high-rate tachycardia detection thresholds is effective in reducing the overall therapy burden, without impairing patient safety, among primary prevention ICD recipients. Accordingly, detailed programming recommendations are now available in expert consensus papers. The proportion of primary prevention ICD recipients whose device is programmed to a prolonged detection strategy and/or high-rate programming strategy is proposed as an indicator of high-quality care ( QI 06M03 ).
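Because QI 06M02 combines several eligibility criteria in its denominator, it is a useful example of how compound inclusion logic might be encoded when auditing a registry. The sketch below is purely illustrative: the field names, thresholds and example records are hypothetical and would need to be mapped to a real data dictionary.

```python
# Illustrative numerator/denominator logic for QI 06M02 as described above.
# All field names and example records are hypothetical.
def eligible_for_primary_prevention_icd(p: dict) -> bool:
    return (
        p["aetiology"] == "ischaemic"
        and p["nyha_class"] in (2, 3)
        and p["lvef_percent"] <= 35
        and p["months_optimal_medical_therapy"] >= 3
        and p["life_expectancy_years"] > 1
    )

def qi_06m02(patients: list) -> float:
    denominator = [p for p in patients if eligible_for_primary_prevention_icd(p)]
    numerator = [p for p in denominator if p["icd_implanted"]]
    return len(numerator) / len(denominator) if denominator else float("nan")

cohort = [
    {"aetiology": "ischaemic", "nyha_class": 2, "lvef_percent": 30,
     "months_optimal_medical_therapy": 6, "life_expectancy_years": 5, "icd_implanted": True},
    {"aetiology": "ischaemic", "nyha_class": 3, "lvef_percent": 34,
     "months_optimal_medical_therapy": 4, "life_expectancy_years": 3, "icd_implanted": False},
]
print(f"QI 06M02 (illustrative): {qi_06m02(cohort):.0%}")  # 50%
```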
Domain 7: catheter ablation
Despite the efficacy of ICD therapy in terminating ventricular tachycardia (VT) episodes, the burden of ICD interventions should be minimized because ICD shocks are associated with poorer quality of life and outcomes. Catheter ablation is an effective intervention in reducing VT recurrences in specific types of VT, with subsequent improvement in survival. Treatment alternatives in ICD recipients experiencing VT recurrences despite antiarrhythmic drug treatment are either escalation of antiarrhythmic drug therapy or catheter ablation. VT ablation is more effective than antiarrhythmic drug escalation in reducing recurrent VT episodes and appropriate ICD shocks in patients with ischaemic cardiomyopathy and VT despite appropriate first-line antiarrhythmic drugs. Therefore, the proportion of ischaemic cardiomyopathy patients with recurrent, symptomatic sustained monomorphic VT despite chronic amiodarone therapy who receive VT ablation is a QI for the provision of catheter ablation therapy ( QI 07M01 ).
Domain 8: outcomes
Whilst VT ablation reduces ICD shocks and VT recurrence and has favourable effects on patient outcomes, it may be associated with procedural complications including stroke and death. Morbidity and mortality in the 30 days following VT ablation are not negligible. Notwithstanding that procedural complications or death within 30 days after VT ablation are not necessarily attributable to the procedure per se, but rather to the underlying heart disease or even non-cardiac causes, it remains important to monitor trends in all-cause mortality ( QI 08M01 ) and procedural complications in the first 30 days following VT ablation ( QI 08S03 ). With regard to ICD procedures, complications in the first 30 days after implantation ( QI 08S01 ) and procedure-related infections up to 1 year after all types of ICD implantation ( QI 08S02 ) are QIs. Survival to hospital discharge after out-of-hospital cardiac arrest is determined by several factors including the organization of the emergency medical service, bystander CPR rates, post-resuscitation protocols and the provision of long-term care. Survival to hospital discharge is a key indicator for monitoring changes over time within a given system and for comparison across sites ( QI 08M02 ).
This document presents the first suite of the ESC QIs for the management of patients with VA and the prevention of SCD. It was developed in collaboration with EHRA and the Task Force of the 2022 ESC guidelines for the management of patients with VA and the prevention of SCD. These 17 main and 4 secondary QIs across 8 domains of care were developed using a standardized methodology that combines evidence with expert judgment, and serve as tools to monitor and improve the management of patients with VA and to reduce the burden of SCD.

QIs have gained increased attention in recent years for two reasons. First, they provide tools for assessing, monitoring and reporting the quality of care and associated improvement initiatives within and across healthcare systems. Second, QIs support the adoption of guideline recommendations into clinical practice by translating key messages into specific and measurable QIs. This point has been recognized by the ESC and, since 2020, the ESC guidelines have been accompanied by suites of QIs. The present document outlines key aspects of care for the management of VA and the prevention of SCD. The 2016 American College of Cardiology/American Heart Association (ACC/AHA) performance and quality measures for SCD prevention provided a list of important and feasible interventions, but lacked structural or outcome QIs, which are of particular importance in the context of VA and SCD prevention. In addition, there are no recommendations in the ACC/AHA set for the application of advanced imaging (e.g. LGE-CMR), monitoring (e.g. remote monitoring) or therapeutic (e.g. ablation) technologies for patients at risk of SCD.

The QIs defined in this document may stimulate quality assessment and improvement for patients at risk of SCD, but also provide the basis for data collection across different settings. The European Unified Registries on Heart care Evaluation and Randomized Trials (EuroHeart) project incorporates the ESC QIs for cardiovascular disease into its international registries so that standardized 'real-world' data and performance may be described, and care improved. Furthermore, the QI of autopsy following a sudden unexplained death addresses the extreme heterogeneity and inequality of access across Europe. A recent survey of the EHRA Research Network and European Reference Network GUARD-Heart, conducted by the Scientific Initiatives Committee and the European Cardiac Arrhythmia Genetic Focus Group of EHRA, indicated that, on average, an autopsy was performed in 43% of suitable cases: 39% of respondents stated that autopsy rates were between 50% and 100%; 23% reported a rate between 25% and 49%; 31% a rate from 1% to 24%; and 7% stated that no autopsy is usually undertaken. The main reason for low autopsy rates was the lack of a legal mandate; addressing this requires a Europe-wide public health initiative, the impact of which this QI will help to measure.

The selection of the developed QIs was structured according to the ESC methodology for QI development. The conduct of a systematic review of the literature and the involvement of a far-reaching Working Group ensured that the selected set of QIs are valid measures of care quality which are also feasible and relevant to existing gaps in care delivery. There are limitations of our work which merit consideration. The target population for these QIs was broad and included patients at risk for SCD, as well as victims of SCD and their family members.
As such, the Working Group prioritized key aspects of care delivery across the whole spectrum of SCD prevention and avoided replicating relevant QIs that have recently been covered in other suites of the ESC QIs, such as those for heart failure, cardiovascular disease prevention and cardiac pacing. Some of the QIs relate to care that is not available in some areas of Europe (e.g. cMRI and specialist pathology). Even so, the majority of the Working Group agreed upon these measures so that they may be used in advocacy for changes in healthcare delivery. The ESC methodology used to develop the QIs relied on expert opinion, and this may have influenced the results. To minimize bias: (i) a systematic literature review was performed as a basis for QI development; (ii) the subsequent modified Delphi method for selection of the final set of QIs followed a standardized process; and (iii) the members of the Working Group included experts in cardiac electrophysiology, cardiomyopathies and channelopathies, general cardiologists and patient representatives, as well as individuals with expertise in the development of QIs, and all voted independently during the Delphi process. We recommend that the QI suite be evaluated and refined as new evidence becomes available.
This document defines 17 main and 4 secondary QIs across eight domains of care for the management of patients with VA and for the prevention of SCD. The QIs span the breadth of care delivery for individuals at risk of SCD and provide a framework for quality improvement initiatives aiming to improve the quality of care and outcomes for the management of VA and the prevention of SCD.
Supplementary material is available at Europace online.
|
The implications of a cost-of-living crisis for oral health and dental care
|
fb53b8a8-9d60-4774-8758-e4300d38ad11
|
10103663
|
Dental[mh]
|
'Please sir, my brother went to stay with my dad last night and he took the toothbrush.' This explanation was offered by a young girl to one of the authors of this paper (IGC) to explain why her oral hygiene was less than ideal, in spite of a long discussion on the topic at a previous dental visit. The child was being totally honest. It said everything about her family and personal circumstances. This incident occurred more than 30 years ago but its impact was such that it has been used when teaching successive cohorts of dental students in the time since. It is a stark reminder that many of those we care for do not live in the same circumstances as us. To think that there might only be one, or no, toothbrushes in a home comes as a shock to many dental students. Dental caries can be a disease of poverty and poor oral health is significantly related to social and economic disadvantage. Much has been written in the pages of this journal and its sister publications in recent months about the impact of the current economy on the dental profession and dentistry, but have we thought sufficiently about how the current cost-of-living crisis is impacting society and the patients that we are here to care for? The basics of securing oral health are:
- Brush your teeth twice a day with a fluoride-containing toothpaste
- Reduce both the amount and frequency of free-sugar consumption
- Visit your dentist regularly.
These simple actions are currently in peril for many people. This article discusses the cost-of-living crisis from the perspective of people living in poverty and the impact that it is likely to have on their access to dental care and their oral health.
The term 'hygiene poverty' has received attention in relation to menstrual health, but what about those who cannot afford the personal products needed to maintain their oral health? What if your personal circumstances and disposable income are such that you cannot afford to buy toothbrushes and toothpaste for your children, or that the toothbrush has to be shared? The Hygiene Bank defines hygiene poverty as: 'not being able to afford many of the everyday hygiene and personal grooming products most of us take for granted'. Hygiene poverty occurs when a person's household income is such that they face a choice between paying the rent, heating their home, eating, or keeping themselves clean. Charities are reporting families asking for toiletries such as toothbrushes. A recent survey conducted by YouGov estimated that 3,150,000 adults in the UK - 6.5% of the population - are currently experiencing hygiene poverty. Of a sample of 2,006 people experiencing hygiene poverty, 28% said that they had gone without toothpaste, toothbrushes or essential dental products. Hygiene essentials were reported as being bottom of the list when budgets were tight. As is often the case, vulnerable groups are disproportionately affected by hygiene poverty - individuals from minority ethnic groups and those with a disability or long-term health condition are more likely to report hygiene poverty. There is some evidence that providing families with young children with toothbrushes and fluoride toothpaste (within a multicomponent programme) results in overall cost savings to a healthcare service. Oral health improvement programmes, such as Designed to Smile (Wales) and Childsmile (Scotland), recognise that supervised toothbrushing in schools will not achieve its full potential if children do not have access to the wherewithal to brush their teeth at home. For this reason, packs containing toothbrushes and toothpaste for home use are delivered as part of these schemes. We know that the improvements that have been seen in oral health in the UK over the past four decades are in large part due to twice-daily use of fluoridated toothpaste. If those who are most susceptible to dental caries can no longer afford a toothbrush and toothpaste, then inequalities in oral health can only widen.
The Food and Agriculture Organisation of the United Nations defines food insecurity as a 'lack of regular access to enough safe and nutritious food for normal growth and development and an active and healthy life'. While food insecurity is mostly associated with the developing world, moderate or severe food insecurity also exists in high-income countries. Approximately 8% of the population in North America and Northern Europe - around 88 million people - were food insecure in 2017-2019. Unsurprisingly, food insecurity is more prevalent in households of lower socioeconomic position, in disadvantaged communities, and among lower-income households. However, poverty (defined as 60% of the median equivalised net household income) and food insecurity are not synonymous. One-fifth of individuals in poverty are food insecure, compared to 4% of individuals not in poverty. Children in poverty are the most likely to be suffering from food insecurity and families consisting of single adults with children in poverty are particularly vulnerable. Comparatively, pensioners in poverty are the least likely to be food insecure. Lower-income households spend a higher percentage of their budget on food. The average UK household spends 11% of their weekly budget on food, while for the lowest 20% of households by equivalised income, this is closer to 15%. Living in poverty is expensive. Examples of the 'poverty premium' include the use of pre-paid utility meters, dearer insurance policies and more expensive credit. Food is also typically more affordable when bought in bulk, but what happens if you don't have the facilities to refrigerate or freeze food or can't afford to turn on your oven? The inability of low-income households to access the best deals for food and services exacerbates pre-existing inequalities in society. Food insecurity is not only about being able to afford enough food, but also being able to afford food that is nutritious. The dietary quality of food purchased by food-insecure households is lower than that of food-secure households. There is a consistent inverse association between food insecurity and intake of nutrient-rich foods, such as fruit and vegetables. Similarly, consumption of energy-dense foods, such as high-fat dairy products, salty snacks, and sugar-sweetened beverages, is higher among food-insecure households. There is also evidence of a strong, consistent, dose-response relationship between food insecurity and lower vegetable intake among children aged 1-5 years, and strong and consistent evidence of higher added sugar intake among food-insecure children aged 6-11 years, compared with food-secure children. Of specific relevance at the present time, analysis of food bank parcels distributed in Oxfordshire found that they exceeded energy requirements and provided disproportionately high sugar and carbohydrates compared to UK guidelines. Foodbanks, which act to alleviate food insecurity, are now a feature of most communities in the UK, and while they play an important role in preventing people going hungry, evidence suggests that they will not make reducing dietary sugar intake and compliance with nutritional guidelines any easier. The increasing prevalence of food insecurity thus feeds a stubborn cycle of poorer health outcomes, chronic disease and reduced quality of life.
Access to NHS dental care and the difficulties therein have in recent months received endless attention in the broadcast, print, social and specialist dental media. In the latter, the attention has most commonly focused on the difficulties facing dental providers as they struggle with the aftermath of the COVID-19 pandemic and the shortcomings of NHS funding and contracting arrangements. Less attention has been paid to how the cost-of-living crisis has impacted on patients. Dentistry can offer more than the State can afford to pay for - dental implants and tooth whitening being just two examples. As a result, a two-tier system in the provision of dental care has existed for a long time. It is simply a fact of life that not everyone can be provided with, or afford, 'high-end' treatments. Patients not being able to afford what they would ideally like from dental care, or anxiety about finding out how much dental care would cost in advance of attending, has long been an issue. However, the current cost-of-living crisis means we are now experiencing an era where more patients may not be able to afford even basic NHS dental care. While there is no patient charge for those who are in receipt of certain state benefits, and an NHS low-income scheme that will assist some low earners, as always in any means-tested system, it is those who just fail to qualify that are likely to be worst affected. In recent months, the press has been rife with stories of those who have resorted to do-it-yourself dentistry, sometimes attributed to the inability to find a dentist or the inability to pay for care. A case headlined by the BBC - 'I had to choose between heating or my teeth' - reported on a patient who, because of the energy crisis, opted to pay £50 to have her tooth extracted rather than paying £1,000 for a root-filling and crown to save the tooth. The Money and Pensions Service, an arm's-length body sponsored by the Department for Work and Pensions, recently commissioned a survey which claimed that one in six adults in the UK - nine million people - have no savings. Another five million have less than £100 in savings. Consider these findings in light of the cost of dental care. Even when care is provided via the NHS, it is easy to see the dilemma in which those most likely to experience a dental emergency may find themselves. The establishment of urgent treatment centres may go some way to alleviating access issues, but if these are a distance from people's homes, can they afford the costs to travel there, whether reliant on public transport or needing to buy fuel to travel by car? 'Visiting your dentist regularly' is an unaffordable expense for many of those in our society who would most benefit from such a visit.
A charity has the tagline: 'the opposite of poverty is not wealth, the opposite of poverty is enough'. This leads to one final consideration in relation to the present cost-of-living crisis; this time, the concern is not for patients, but for staff. Dental nurses are essential to the success of a dental practice, yet Sellars, commenting on the largest sector of the dental workforce, said dental nurses feel 'overworked, undervalued and underpaid'. Perhaps it is not only the person in your dental chair that is struggling to heat their home or feed their children. It may also be true of the person sitting on the other side of the chair. Being in work is no longer a defence against poverty. In a recent publication, the highly regarded Joseph Rowntree Foundation stated that around two-thirds (68%) of working-age adults in poverty live in a household where at least one adult is in work. Since 2011/12, the employment sector which has seen the greatest increase in poverty for those in work is the human health and social care sector. As of November 2022, the UK national living wage (for those aged 23 and over) is £9.50 per hour. This equates to an annual full-time salary of between £17,290-23,712, depending on the exact hours worked. However, it is argued that the national living wage provides insufficient resource to facilitate the opportunities and choices necessary to participate in society. Instead, the Joseph Rowntree Foundation proposes a minimum income standard: a public consensus on the financial resource that households need in order not just to survive, but to live with dignity. For a single person in 2022, this was £25,500, and for a single parent with two young children, £38,400. In contrast, the most recent salary review by the British Association for Dental Nurses reported that 73% of dental nurses earned under £20,000 per annum. Two-thirds of dental nurses responding worked full-time. The majority live with partners/spouses and their children and 31% claimed to be the primary earner in the household. Further, 16% of dental nurses said that they had a second job and just under half of those reported that their second job was necessary to meet basic needs. One response to the disparity between dental nurses' salaries and the cost of living may be the trend towards agency nursing or self-employment. Of the 65% of dental nurses responding to a 2020 survey who reported considering leaving the profession, pay was the most commonly cited factor. When training, registration, indemnity and continuing professional development costs are also considered, it is perhaps not surprising that alternative employment opportunities outside dentistry are a rational financial decision for some dental nurses and their families. It is beyond the experience of the authors of this article to discuss the complexities of practice ownership and employee pay, particularly within the fixed financial envelope of NHS practice. This is, however, an opportunity to call for wider recognition of our lowest-paid colleagues and to highlight the moral responsibility we have to ensure that those employed in dentistry have the means by which to live in dignity and fully participate in society.
As everyday costs continue to rise, many of our patients and the communities which we serve are likely to experience difficulties securing the basics to achieve good oral health. This impact will not be felt equally. Targeted support is needed for those most at risk of experiencing food insecurity, hygiene poverty and financial barriers to dental care. However, in an already over-stretched health and social care system and fragmented state-benefits structure, it seems more likely than ever that these individuals will fall through the gaps. Short-term government assistance and the services of third-sector organisations can only go so far in off-setting rising prices for some of the most vulnerable households and does nothing to improve the forecast for those currently struggling to get by day-to-day.
|
COVID-19 self-isolation patterns in UK dental care professionals from February to April 2020
|
3cc647eb-5f0b-407b-934a-c4b0e35bdc49
|
10103668
|
Dental[mh]
|
The UK experienced unparalleled disruption to dental activity in Spring 2020. On 25 March 2020, in response to the escalating COVID-19 pandemic, the Office of the Chief Dental Officer (OCDO) issued its third regular update to general practitioners and community dentists. The update directed that dental teams cease 'all routine, non-urgent dental care until advised otherwise'. Patients with urgent needs were to be managed remotely via telephone triage and 'whenever possible, [treated] with advice, analgesia, [and] antimicrobial means where appropriate.' Dental conditions which could not be managed through these means were to be referred to urgent dental care hubs. In the week preceding the OCDO update, chief dental officers from the other home nations had made similar recommendations. Dental teams were advised wherever possible to reduce the number of routine examinations and aerosol-generating procedures. These March 2020 recommendations contrasted with the first NHS England Standard Operating Procedure (SOP). The 27 February SOP determined that 'most patients presenting in primary dental care settings are unlikely to have COVID-19' and that a 'possible case of COVID-19 needs both clinical symptoms and travel history or contact with a confirmed case'. The basis for the policy change was likely a combination of factors: 1) the announcement that the UK was moving from the contain phase into the delay phase; 2) the Public Health England recommendation that people identified as 'clinically extremely vulnerable' strictly observe shielding measures; 3) that dental teams may be at increased risk of infection; and 4) that dental care itself may be a route for transmission of SARS-CoV-2 within the community. In the absence of widespread testing for SARS-CoV-2 infection in the dental workforce, it was believed a simple self-reporting survey could capture self-isolation patterns in dental care professionals (DCPs). It was hoped that such a survey could provide useful epidemiological data to policymakers when considering the risks of COVID-19 transmission within dental settings. Electronic surveys are straightforward for respondents, mitigate the risk of data loss and facilitate data transfer and analysis. Web-based platforms have been used previously to capture data from healthcare professionals.
A web-based closed questionnaire via the Survey Monkey platform (SurveyMonkey Inc, San Mateo, California, USA, www.surveymonkey.com ) captured reported COVID-19 self-isolation patterns in the dental team between 10 and 17 April 2020. The survey was openly distributed through messaging apps and dental social media channels (see the online Supplementary Information). The original data were stored on the survey platform in accordance with their privacy and security policies ( https://www.surveymonkey.co.uk/mp/legal/security/ ) ( https://help.surveymonkey.com/en/policy/surveymonkey-data/ ). Access was password-restricted to one of the authors (AH), who validated the data before removing potential identifiers (General Dental Council [GDC] numbers and IP addresses). It was not possible to identify participants from the working data set. A list of COVID-like symptoms was developed by combining the information available at that time (Box 1). A total of 3,309 responses were collected. All incomplete responses were discarded. Only responses from UK dental professionals were evaluated. The GDC number was screened and the response rejected where the format did not match the registrant type (however, GDC numbers were not individually validated). Responses were discarded where GDC numbers were duplicated, or where the date for self-isolation predated the first confirmed UK COVID-19 case. The 2,888 (87.3%) valid responses remaining for evaluation represent 2.55% of dental professionals registered as of 31 December 2019 (113,439). Results were analysed in GNU PSPP (GNU PSPP for GNU/Linux, version 1.2.0-g0fb4db, Boston, MA, Free Software Foundation, www.gnu.org/software/pspp/ ). The data from the survey are presented through descriptive statistics. Prior to data collection, an assessment was made of the need for ethical review. The survey was found to be exempt from UK ethical approval. The Health Research Authority's (HRA's) 'Does my project require review by a research ethics committee?' document was consulted in conjunction with the HRA online decision tool. The proposed survey was identified as research. At stage two, assessment for each country within the UK was made using the online decision tool 'Do I need NHS REC review?' The online tool established that the survey did not meet the criteria requiring ethical approval. The UK HRA document 'Standard operating procedures for research ethics committees' was additionally consulted. The proposed survey did not meet the criteria requiring ethical review: the research only involved staff of healthcare services (by virtue of their professional role). Further reference to 'Governance arrangements for research ethics committees' confirmed that the proposed survey did not meet the criteria requiring ethical review. The study participants had to enter their GDC number on the landing page as part of the confirmation that they were willing to take part in the survey and to ensure that duplicate entries were not received. Consent to participate was implied by completion of the survey. IP addresses were used on a temporary basis to ensure the integrity of the data, but these were not stored. A statement explaining that each participant's survey data would be anonymised was made.

Box 1 COVID-like symptoms used in the survey:
- Fever
- Cough
- Shortness of breath and breathing difficulties
- Anosmia
- Muscle or joint aches
- Headache
- Tiredness
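The validation steps described above (discarding incomplete responses, restricting to UK registrants, rejecting duplicated GDC numbers and removing implausible self-isolation dates) can be expressed as a short pandas sketch. The column names and the date constant are illustrative assumptions; the original screening was performed on the survey platform and in GNU PSPP rather than with this code.

```python
# Illustrative re-expression of the response-validation rules described in the
# Methods; all column names are hypothetical and the cut-off date is assumed.
import pandas as pd

FIRST_UK_CASE = pd.Timestamp("2020-01-31")  # first confirmed UK COVID-19 cases (assumed cut-off)

def clean_responses(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw[raw["complete"]]                                  # discard incomplete responses
    df = df[df["country"] == "UK"]                             # UK dental professionals only
    df = df.drop_duplicates(subset="gdc_number", keep=False)   # discard duplicated GDC numbers
    plausible = df["isolation_start"].isna() | (df["isolation_start"] >= FIRST_UK_CASE)
    return df[plausible]                                       # drop implausible isolation dates
```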
Of the 2,888 valid responses, 0.6% (18) were clinical dental technicians (CDTs), 22.9% (661) dental hygienists, 10.8% (313) dental nurses, 1.3% (38) dental technicians, 18.2% (526) dental therapists, 45.7% (1,321) dentists including specialists, 0.3% (8) oral and maxillofacial surgeons and 0.1% (3) orthodontic therapists. The mean number of days dental professionals reported treating patients before the current pandemic was 4.03 (standard deviation [SD] = 1.07). On a typical pre-pandemic day, 2.1% (61) of dental professionals reported that they did not normally see any patients, 1.7% (50) typically saw between 1-5 patients, 11.7% (339) saw 5-10 patients, 59.7% (1,725) between 10-20 patients, and 24.7% (713) saw more than 20 patients each day.

Self-isolation patterns
In total, 26.8% (775) of respondents reported that they self-isolated due to COVID-19. Of these, 31.2% (242) reported they did so because they were suffering from the symptoms associated with COVID-19 (COVID-like symptoms), 21.3% (165) did so in order to protect or shield a vulnerable member of their household, 25.7% (199) did so because a member of their household was suffering from COVID-like symptoms, and 21.8% (169) self-isolated in order to protect or shield themselves ( ). Of those who self-isolated in this survey, 15.5% (120) did so for between 1-7 days, 41.9% (325) for 8-14 days, and 42.6% (330) for 15 or more days. The proportion of each registrant type who identified themselves as shielded or vulnerable was: CDTs = 16.7% (3); dental hygienists = 6.2% (41); dental nurses = 6.4% (20); dental technicians = 7.9% (3); dental therapists = 7.6% (40); dentists = 4.7% (62); and 0.0% of oral/maxillofacial surgeons and orthodontic therapists. Further information relating to household size was gathered from those who reported they self-isolated. For respondents who reported self-isolation for any reason, mean household size was 3.15 (SD = 1.24; n = 775). For those who reported COVID-like symptoms, mean household size was 3.02 (SD = 1.28; n = 242). Where the dental professional reported a household member having COVID-like symptoms, mean household size was 3.52 (SD = 1.06; n = 199). For those who self-isolated to protect a vulnerable member of their household, mean household size was 3.4 (SD = 1.4; n = 165). For dental professionals who self-isolated because they identified as vulnerable, mean household size was 2.64 (SD = 1.03; n = 169). displays the 95% confidence intervals (CIs) for each of these groups. Where the dental professional self-isolated, supplementary information was gathered regarding the total number of household members who currently were, or had been, suffering from COVID-19 symptoms. For all groups who self-isolated, the mean number of household members reporting COVID-like symptoms was 1.05 (SD = 1.17; n = 775). For dental professionals with COVID-like symptoms, the mean number of household members with COVID-like symptoms was 1.81 (SD = 1.11; n = 242). Where the dental professional reported a household member having COVID-like symptoms, the mean number of household members with symptoms was 1.63 (SD = 1.03; n = 199). For those who self-isolated to protect a vulnerable member of their household, the mean number of symptomatic household members was 0.19 (SD = 0.53; n = 165). For dental professionals who self-isolated because they identified as vulnerable, the mean number of symptomatic members of the household was 0.11 (SD = 0.44; n = 169). displays the 95% CIs for each of the groups.
Moreover, 89.5% (299) of households which did not report any household members with COVID-like symptoms at the time of self-isolation remained free of occupants with COVID-like symptoms; 10.5% (35) of households did not.

Description of the symptoms
The proportion of each registrant type who reported they self-isolated due to COVID-like symptoms was: CDTs = 5.6% (1); dental hygienists = 6.5% (43); dental nurses = 6.7% (21); dental technicians = 5.3% (2); dental therapists = 9.9% (52); dentists = 9.1% (120); oral or maxillofacial surgeons = 37.5% (3); and orthodontic therapists 0.0% (0). The symptoms reported by these dental professionals and their household members can be found in . In the current survey, only 2.9% (7) of dental professionals who reported they self-isolated because they were suffering from COVID-like symptoms also reported they had been tested for the disease. Of these, 42.9% (3) tested positive. Furthermore, 8.5% (17) of dental professionals who self-isolated due to a household member suffering from COVID-like symptoms also reported that their household member had been tested for the disease. Of those household members tested, 52.9% (9) tested positive.

Dental aerosol
Of the respondents who self-isolated because they were suffering from COVID-like symptoms, 96.7% (234) considered themselves routinely exposed to dental aerosol. Of those who did not self-isolate or self-isolated for other reasons, 96.5% (2,553) considered themselves routinely exposed to dental aerosol.

Date of self-isolation
Frequency polygons displaying the incidence of self-isolation over time can be found in , , and . The first dental professional self-isolating in relation to COVID-19 in this survey did so on 10 February 2020. The frequency of self-isolation increases in all groups from 10 March 2020 and decreases following the national lockdown on 23 March 2020.
In this survey of UK-registered dental professionals' self-isolation patterns during the early phase of the COVID-19 pandemic, 8.4% of dental professionals self-isolated due to COVID-like symptoms and 6.9% did so because a member of their household had symptoms. provides a comparison of the current study with others conducted during the early stages of the COVID-19 pandemic. Similar to other studies conducted during the early phase of the pandemic, this study found that the proportion of dental professionals reporting COVID-like symptoms was comparable with the estimated community infection rate. For the UK, based on serological sampling, the Office for National Statistics estimated that, on 24 May 2020, 6.78% (95% CI: 5.21; 8.64) of the population had COVID-19 antibodies. The current study, based on self-reported COVID-like symptoms, estimates that 8.4% (95% CI: 6.6; 10.2%) of UK dental professionals experienced COVID-like symptoms before 17 April 2020. The pattern of self-isolation in those who reported COVID-like symptoms is less erratic than that in the other groups. This is most likely because the dental professional is fully conversant with their own symptoms. Dental professionals who reported they self-isolated due to COVID-like symptoms did so earlier than the other groups. Peak frequency for those who reported they self-isolated due to COVID-like symptoms was on 16 March (n = 22). For those who self-isolated for other reasons, the peak frequency for self-isolation occurred seven days later, at the point of the national lockdown. The mean household size for dental professionals who self-isolated due to a member of their household displaying COVID-like symptoms, or who self-isolated to protect a member of their household, was generally larger than the mean household size of those who self-isolated due to individually suffering from COVID-19 symptoms, or who individually identified as vulnerable ( ). This may simply reflect that, due to their increased size, larger households could be expected to contain at least one household member who identifies as vulnerable. Similarly, by virtue of their size, larger households may suffer from double jeopardy: they are more likely to have at least one member who is infected, and additionally contain more potential vectors and opportunities for the disease to enter the household. Smaller households are customarily associated with younger and older adults. The latter are associated with a greater prevalence of multi-morbidity; accordingly, they could be expected to more readily identify as vulnerable. Aside from the nation level, this survey neglected to record respondents' geographical location. Accordingly, it was unable to map self-isolation patterns to recognised zones showing elevated rates of infection. Future surveys may be able to improve upon this by recording location information. As a proportion of those registered, the number of respondents for the following registration types was below 1%: dental nurses, dental technicians and orthodontic therapists. Future surveys need to find ways of engaging these vital members of the dental workforce.
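Where proportions such as the 8.4% figure above are compared with external estimates, a confidence interval makes the comparison easier to interpret. The following minimal Python sketch computes a normal-approximation 95% CI for the raw proportion of symptomatic respondents (242 of 2,888); it is illustrative only, and the interval reported in the paper (6.6-10.2%) may have been derived with a different or adjusted method.

```python
# Illustrative 95% CI for a reported proportion using statsmodels; the paper's
# exact method is not stated here, so this may not reproduce its interval.
from statsmodels.stats.proportion import proportion_confint

symptomatic, respondents = 242, 2888
low, high = proportion_confint(symptomatic, respondents, alpha=0.05, method="normal")
print(f"{symptomatic / respondents:.1%} (95% CI: {low:.1%} to {high:.1%})")
```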
According to this survey, 8.4% (95% CI: 6.6; 10.2) of UK-registered dental professionals self-isolated before 14 April 2020 due to COVID-like symptoms. It is likely that a number of these symptomatic dental professionals did not have COVID-19. Equally, it is probable that infected dental professionals who lacked COVID-like symptoms did not self-isolate and were not identified by this survey. This survey is limited by virtue of its self-reporting nature. Ordinarily, survey questionnaires should be corroborated by alternative research methods to ensure the information they capture authentically represents the data which they purport to evaluate. Additionally, self-reporting questionnaires require verification to confirm they generate reliable results over time. However, the circumstances in Spring 2020 did not allow sufficient time for the normal processes of acceptance to occur. The data were uploaded onto a pre-print server on 29 April 2020. Despite the lack of other available data during the earliest phase of the pandemic, it is not known if national strategic policymakers considered the data from this survey to inform decision-making. If future policymakers are to make well-informed decisions about risk, it is critical that wide-scale systems are already in place to collect data during the earliest phases of a pandemic in order to rapidly understand the risks to DCPs. It is difficult to understand why there was no organised national effort to achieve this. It cannot be assumed that testing regimens for new pathogens will be available from day zero.
|
Microbes and hydrothermal environments: An annotated selection of World Wide Web sites relevant to the topics in environmental microbiology
|
a3501566-ed4c-4620-b133-a0ba7029c386
|
10103755
|
Microbiology[mh]
|
https://ocean.si.edu/ecosystems/deep‐sea/microbes‐keep‐hydrothermal‐vents‐pumping This article written by Smithsonian Ocean staff provides a good overview of hydrothermal vent communities suitable for non‐scientific readers.
https://www.mdpi.com/2075‐163X/11/12/1324 This report reviews the microbial diversity and microbe‐mineral interactions in deep‐ocean hydrothermal environments.
https://www.nature.com/articles/s43705‐021‐00031‐1 This report dealt with the rich taxonomic diversity in vent communities, with a particular focus on those within the tubeworm Ridgeia piscesae .
https://microbiomejournal.biomedcentral.com/articles/10.1186/s40168‐020‐00851‐8 This article covered microbial succession during a change from active to inactive deep‐sea sulfide chimneys.
https://www.nature.com/articles/s41579‐019‐0160‐2 This paper is a broad overview of microbial population at deep‐sea areas containing hydrothermal vents.
https://astrobiology.nasa.gov/news/the‐gain‐and‐loss‐of‐genes‐at‐hydrothermal‐vents/ This page provides a summary of a published paper on gene transfer in deep‐sea hydrothermal vent bacteria. It contains a link to the original article.
https://www.science.org/doi/10.1126/science.229.4715.717 This article by Jannasch and Mottl is one of the classics in the field of hydrothermal vent chemistry and microbiology.
https://www.youtube.com/watch?v=1LrcTa0dDmw There are numerous videos on life in deep‐sea environments. This is one of the longest and most comprehensive of those.
https://microbewiki.kenyon.edu/index.php/Deep_sea_vent This page of the Microbe Wiki describes deep sea vent properties, global locations, bacterial communities, and eukaryotic organisms present in those environments.
https://www.pnas.org/doi/10.1073/pnas.0503674102 This report describes the isolation and characterization of a green sulfur bacterial species from a deep‐ocean hydrothermal vent that is proposed to use geothermal radiation as a source of photons to be absorbed by its photosynthetic pigments.
https://www.frontiersin.org/articles/10.3389/fmicb.2018.02873/full This article reviews knowledge of, and techniques for studying, hydrogen uptake and hydrogen evolving enzymes involved in hydrogen cycling at hydrothermal vents.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8269205/ This study samples several deep‐sea hydrothermal vents to sequence viruses that were present. The general conclusions were that diverse viruses were present with different assemblages that are highly localized to specific areas and hosts.
|
Microbial global transport: An annotated selection of World Wide Web sites relevant to the topics in environmental microbiology
|
ad06a05b-aa06-4be2-a1e8-e68e02949deb
|
10103878
|
Microbiology[mh]
|
https://phys.org/news/2017‐09‐global‐microbes.html This general article describes the issues and problems associated with human activities that move microbes around the globe at an unprecedented rate.
https://www.nature.com/articles/s41467‐017‐00110‐9 This study looked at atmospheric transport of microbes over tropical and sub‐tropical waters. The magnitude of microbial transport was estimated to be greater than 10²¹ cells.
https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2011EO300001 This is a good review representing the state of knowledge on atmospheric transport of microbes as of 2011.
https://microbewiki.kenyon.edu/index.php/Activity_and_Transport_of_Microbes_in_the_Atmosphere This is a relatively short but interesting overview of the presence and activities of microbes in the atmosphere.
https://academic.oup.com/femsre/article/46/4/fuac009/6524182 Some studies have looked largely at the numbers or activities of microbes in the atmosphere. This review focused on the biogeography of the microbes, their dispersal and how anthropogenic activities affect atmospheric microbial communities.
https://journals.asm.org/doi/10.1128/Spectrum.01447‐21 This comprehensive study examined microbial communities in dust samples obtained from 33 countries distributed over 6 continents.
https://researchcommons.waikato.ac.nz/bitstream/handle/10289/13245/Airblimits.pdf;jsessionid=1BD1C8DFEEC7F00E39CB39EACEA5E2B5?sequence=42 This article focused on Antarctic Dry Valleys and concluded that airborne communities of microbes are largely localized and not well‐dispersed on an inter‐continental scale.
https://www.mdpi.com/2073‐4433/11/12/1296 While some studies have looked at microbes in the stratosphere, this report focused on microbial communities found in the troposphere.
https://fems‐microbiology.org/femsmicroblog‐climate‐change‐affects‐microbes/ This community science blog provides a fun look at some of the potential consequences emanating from the presence of atmospheric microbes.
https://www.science.org/doi/abs/10.1126/science.aao3007 This commentary emphasizes the impacts of human activities on the spread of microbes, genes, and gene functions such as antibiotic resistance.
https://www.hindawi.com/journals/tswj/2018/7360147/ This report described sampling for microbes on the illuminator of the International Space Station during a spacewalk. After transport to earth, DNA was amplified by PCR and sequenced. It is unclear if contamination from surface microbes might have been an issue.
https://medicine.wustl.edu/news/global-travelers-pick-up-numerous-genes-that-promote-microbial-resistance/ This study described analyzing the gut microbiomes of international travelers. It suggested the acquisition of genes encoded by bacteria that were endemic to the region that was travelled to.
https://blogs.scientificamerican.com/life-unbounded/the-atmospheric-microbiome/ This short article provides a nice perspective on microbes being transported by or finding a habitat in the Earth's atmosphere.
|
Cofunctioning of bacterial exometabolites drives root microbiota establishment
|
a9258ff7-ddbb-4c2e-8d84-500acc22633f
|
10104540
|
Microbiology[mh]
|
Widespread Production of Specialized Exometabolites among Root-Associated Bacteria. To investigate the prevalence of interbacterial competition mediated by secreted metabolites (hereafter referred to as exometabolites), we used a modified Burkholder plate-based assay (mBA; ref. and SI Appendix , Fig. S1 A ) and tested 39,204 binary interbacterial interactions, i.e., 198 producer versus 198 target isolates. Independent validation of 7,470 randomly selected interactions (i.e., 19%) revealed 95% reproducibility of interaction phenotypes ( SI Appendix , Table S1 ). We detected an inhibition halo in 1,011 interactions (i.e., 2.6% of tested pairwise interactions involving 66% of the isolates, SI Appendix , Fig. S1 B and Table S1 ), suggesting antibiosis due to specialized exometabolites. Antagonistic interactions were detected between all bacterial classes tested, indicating that the production of exometabolites is common in the At -RSphere culture collection of commensals ( ). Actinobacteria isolates were most sensitive to all other classes, especially to Gammaproteobacteria, which showed the highest aggregated frequency and average intensity of inhibitory activities ( ). Since we observed taxonomic signals at the class level, we calculated the average inhibition halo size for all target and producer isolates across all bacterial classes tested (see sensitivity scores in and inhibition scores in , see also SI Appendix , Fig. S1 B ). This revealed that only a few bacteria – mainly belonging to Pseudomonadaceae (R9, R68, R71, R329, R401, R562, and R569) – exhibited broad inhibition of phylogenetically diverse isolates ( ). Extensive strain-specific variation in inhibition scores across closely related bacteria was observed, suggesting large standing genetic variation for the production of bacterial exometabolites among root-derived isolates ( ). Finally, we observed a 2.7× higher inhibitory activity for root-derived bacteria compared to those originating from soil ( P = 0.011; Kruskal–Wallis followed by Dunn's post-hoc test and Benjamini–Hochberg correction, ), suggesting that the production of exometabolites might be advantageous for bacterial root colonization. Ralstonia spp . are core members of the root microbiota of healthy Arabidopsis plants in nature ( ). We hypothesized that the bacterial root microbiota of A. thaliana contributes to preventing disease in natural environments and tested the inhibitory activities of 167 of the aforementioned bacteria against pathogenic Ralstonia solanacearum GMI1000 (hereafter referred to as Rs ; ref. and ). A subset of root- and soil-derived bacteria inhibited the growth of Rs in mBA experiments (10.9% and 10.3%, respectively). These inhibitory activities were mainly manifested by bacteria from three genera: Pseudomonas , Streptomyces, and Bacillus ( ). Genomic Capacity for Specialized Metabolite Production Explains Pronounced Inhibitory Activity. We hypothesized that the ability to produce inhibitory halos is correlated with the genome-encoded potential for the biosynthesis of specialized metabolites. We predicted biosynthetic gene clusters (BGCs) for all strains tested in mBA experiments ( SI Appendix , Table S2 ) and examined whether halo producer strains encode more BGCs than nonproducers. The total number of BGCs was significantly increased in the antagonistic isolates ( P = 0.0166).
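The per-isolate summary statistics used above (inhibition and sensitivity scores derived from the halo matrix, and the comparison of BGC counts between halo producers and nonproducers) can be computed in a few lines. The sketch below is illustrative only and is not the authors' pipeline; the file names, the column names (origin, n_bgcs) and the choice of SciPy tests are assumptions.

```python
import pandas as pd
from scipy import stats

# Hypothetical inputs (file and column names are assumptions, not from the paper):
#   halos: producer x target matrix of inhibition halo sizes in mm (0 = no halo)
#   meta:  per-isolate table with an "origin" column ("root"/"soil") and an "n_bgcs" column
halos = pd.read_csv("mba_halos.csv", index_col=0)
meta = pd.read_csv("isolate_metadata.csv", index_col=0).loc[halos.index]

inhibition_score = halos.mean(axis=1)   # average halo size caused as producer
sensitivity_score = halos.mean(axis=0)  # average halo size received as target
is_producer = (halos > 0).any(axis=1)   # produced at least one inhibition halo

# Root- versus soil-derived producers (a Dunn's post hoc test would follow for >2 groups)
_, p_origin = stats.kruskal(inhibition_score[meta["origin"] == "root"],
                            inhibition_score[meta["origin"] == "soil"])

# Do halo producers encode more predicted biosynthetic gene clusters than nonproducers?
_, p_bgc = stats.mannwhitneyu(meta.loc[is_producer, "n_bgcs"],
                              meta.loc[~is_producer, "n_bgcs"],
                              alternative="greater")
print(f"root vs soil inhibition scores: P = {p_origin:.3g}; BGC counts: P = {p_bgc:.3g}")
```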
BGCs, which are thought to be involved in the biosynthesis of nonribosomal peptides (NRPs), aryl polyene, and redox cofactors, were significantly enriched in halo-producing isolates compared with nonproducers ( ), suggesting that these compounds may be important for interbacterial competition. We next investigated the diversity of bacterial metabolites that could explain the observed inhibitory phenotypes. Metabolites were extracted from individual strains (n = 198) grown on the same agar medium used in mBA experiments with two organic solvents with different polarities, ethyl acetate and methanol, to capture greater chemical diversity. Liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) analysis of the resulting 396 samples yielded ~200,000 mass spectra, which were analyzed using the Global Natural Products Social Molecular Networking (GNPS) workflow ( ). Network analysis of the resulting mass spectra revealed 247 families of molecules with similar fragmentation patterns, each representing at least two structurally related analogs ( SI Appendix , Fig. S2 ). While 2,220 nodes were shared between multiple classes, the remaining 1,094 nodes were found to be produced only by individual bacterial classes. Here, Gammaproteobacteria showed the greatest number of class-specific metabolites, indicating that their pronounced inhibitory activity ( ) can be explained by the production of an enormous diversity of specialized metabolites. To determine whether bacteria secrete an increased diversity of specialized metabolites when interacting with competitors ( ), we additionally analyzed the metabolomes of 16 inhibitory zones from 10 producer isolates (R63, R71, R68, R342, R401, R562, R569, R690, R920, and R1310) with broad inhibitory activity against three target isolates (R472D3, R480, and R553) using the same UPLC-MS/MS GNPS workflow. Spectra comparison with those of individually cultured strains revealed 298 additional ions detected exclusively in inhibition zones ( ). Dereplication of the network revealed that the biosynthesis of polyketides, such as two additional congeners of the nactin antibiotics and peptidic compounds, such as cyclo(Trp-Pro), and congeners of a unique lipopeptide family, were specifically triggered by interactions with other bacterial strains, pointing to a possible role of these molecules in interbacterial competition. DAPG Contributes to the Inhibitory Activity of P. brassicacearum R401. P. brassicacearum R401 showed the greatest number of inhibitory interactions (>17× greater than average) and the largest average halo size (>10× greater than average; ). Except for Bacilli and Flavobacteria, 50 isolates belonging to all other tested bacterial classes were sensitive to the inhibitory activity of R401, particularly Actinobacteria ( ), indicating the production of exometabolites with broad spectrum inhibitory activity. antiSMASH-based analysis of a resequenced, circular R401 genome predicted 16 BGCs ( SI Appendix , Fig. S3 and Table S2 ), one of which exactly matches the phl operon of Pseudomonas protegens Pf-5 , which has been shown to encode the enzymatic machinery for the production of the Pseudomonas -specific polyketide (DAPG; ) ( ). Since no other genome from our culture collection harbors the phl operon ( SI Appendix , Fig. S3 ), this points to a role for DAPG in mediating the inhibitory activity of R401. We generated a marker-free deletion mutant of the key biosynthetic gene phlD in R401 ( ). 
In mBA experiments, the R401 Δ phld mutant was significantly—yet only partly—impaired in its antagonistic activity ( ), retaining 71% of its inhibitory activity toward Rs . As DAPG production was completely lacking in the Δ phld mutant ( ), this suggests that DAPG alone is insufficient to explain the full inhibitory activity of R401 toward Rs. DAPG and Pyoverdine Act Additively to Inhibit Taxonomically Distinct Root Microbiota Members. We performed a forward genetic screen to reveal additional determinants mediating the residual inhibitory activity of R401 Δ phld . First, we generated a R401 mini- Tn5 transposon library with >6,000 insertion mutants. We adopted a fluorescence-based bacterial co-culture system in liquid medium to test all the R401 insertion mutants individually for their ability to suppress growth of a GFP-expressing Rs GMI1000 strain ( Rs GMI1600; ref. ). Of the 230 candidates identified in the primary screen, 38 R401 Tn5 mutants were robustly impaired in Rs suppression after two independent rounds of validation ( and SI Appendix , Table S3 ). One of these candidate R401 mutants was also significantly impaired (on average 26.6%) in its inhibitory activity against Rs in mBA experiments on solid agar, suggesting that this mutant is impaired in the production of an inhibitory exometabolite ( SI Appendix , Table S3 ). This R401 mutant carries the Tn5 transposon insertion within the gene of a putative acyltransferase ( pvdY ) involved in the biosynthesis of the siderophore pyoverdine in Pseudomonas aeruginosa ( ). We validated the contribution of R401 pvdY to pyoverdine biosynthesis and Rs inhibition by generating an independent pvdY deletion mutant, Δ pvdy , and a R401 deletion mutant of the gene encoding NRP synthetase pvdL, located downstream of pvdY (Δ pvdl , ). In mBA experiments with Rs GMI1000, the R401 Δ pvdy strain phenocopied the R401 tn5::pvdy mutant and the mutant phenotype was complemented by the expression of pvdY under its native promoter in the R401 Δ pvdy background (Δ pvdy::pvdY ). R401 Δ pvdl exhibited a slightly weaker impairment of halo production compared to R401 Δ pvdy . It is likely that R401 PvdY is involved in hydroxyornithine acetylation—a component of R401 pyoverdine—while PvdL is involved in the initial amino acid condensation, forming the basis for the peptidic backbone of pyoverdines. Acetyl hydroxyornithine could be involved in the biosynthesis of a third, unknown exometabolite by R401, explaining the difference between both pyoverdine mutants observed in mBA experiments. We also generated a R401 double deletion strain, Δ pvdy Δ pvdl , which is impaired in its inhibitory activity against pathogenic Rs to a similar extent as the R401 Δ pvdy single mutant ( ). Using mass spectrometry, we confirmed that all generated R401 pyoverdine mutants had lost their ability to produce pyoverdine (R401 pyoverdine has the sequence: Glu-Q-Lys-AcOHOrn-Ala-Gly-Ser-Ser-OHAsp-Thr, with Q being the fluorophore moiety; ). All tested Pseudomonas isolates in the At -RSphere contain pyoverdine biosynthetic genes; however, for R9, no characteristic pyoverdine fluorescence was detected ( SI Appendix , Fig. S3 ), potentially explaining its low inhibitory activity in mBAs ( ). Therefore, we hypothesized that pyoverdines contribute more broadly to explaining the unusually high inhibitory activity of the Pseudomonadaceae detected in the mBA experiments and isolated pyoverdine mutants from another Pseudomonas root commensal in our collection. 
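Hit calling in the fluorescence-based Tn5 screen described above amounts to asking which co-cultures allow Rs GMI1600 to grow back relative to wells containing WT R401. A minimal, hypothetical sketch is given below; the input format, control labels and the 50% threshold are assumptions made for illustration, and any candidate would still require the independent rounds of validation mentioned in the text.

```python
import pandas as pd

# Hypothetical per-well readout of the co-culture screen (column names are assumptions):
#   plate, mutant, rs_gfp -- GFP signal of Rs GMI1600 after co-culture with each Tn5 mutant;
#   "R401_WT" wells give full suppression, "none" wells give unrestricted Rs growth.
wells = pd.read_csv("tn5_coculture_screen.csv")

flagged = []
for plate, grp in wells.groupby("plate"):
    wt = grp.loc[grp["mutant"] == "R401_WT", "rs_gfp"].mean()
    unrestricted = grp.loc[grp["mutant"] == "none", "rs_gfp"].mean()
    # Fraction of suppression lost, scaled between the two plate-internal controls
    grp = grp.assign(loss=(grp["rs_gfp"] - wt) / (unrestricted - wt))
    flagged.append(grp[(grp["loss"] > 0.5) & (~grp["mutant"].isin(["R401_WT", "none"]))])

candidates = pd.concat(flagged)["mutant"].unique()
print(f"{len(candidates)} Tn5 mutants flagged as impaired in Rs suppression")
```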
We generated a mini- Tn5 mutant library of approximately 6,000 insertion mutants of pyoverdine-producing Pseudomonas fluorescens R569 and assayed about 2,000 mutants for loss of pyoverdine-mediated fluorescence by fluorimetry ( SI Appendix , Fig. S4 A ). Characterization of the Tn5 integration sites revealed one mutant in each of the R401 pvdY and pvdL homologs ( SI Appendix , Table S3 and Fig. S4 B and C ). Unlike the partial loss of inhibitory activity of the R401 Δ pvdy and Δ pvdl single and Δ pvdy Δ pvdl double mutants against Rs , the R569 Δ pvdy and Δ pvdl single mutants both showed a complete loss of inhibitory activity to Rs in mBA experiments ( SI Appendix , Fig. S4 D ). Although the genomes of root commensals R401 and R569 are assigned to different Pseudomonas sublineages ( ) and the arrangement of genes in the pvd operon is different ( and SI Appendix , Fig. S4 C ), it is likely that the corresponding pyoverdines are directly responsible for the observed inhibitory activity. Our findings indicate that pyoverdine might either be the sole or one of the several exometabolites produced by root commensals that limit the growth of Rs . Since pyoverdines function as iron chelators, we tested whether their inhibitory activity can be explained by interbacterial competition for iron by supplementing mBA agar medium with excess ferric iron (100 µM FeCl 3 ). This resulted in significantly impaired Rs inhibition when confronted with WT R401 or undetectable inhibition when confronted with WT R569, thus phenocopying results obtained with the corresponding strain-specific pyoverdine mutants ( and SI Appendix , Fig. S4 D ). Given that the mutants also exhibit drastically reduced iron mobilization capacity compared with the corresponding WT isolates ( and SI Appendix , Fig. S4 E ), we conclude that their inhibitory activities against root commensals and pathogenic Rs are likely mediated through their iron chelator function. However, we cannot exclude a possible additional pyoverdine activity under limited iron conditions. To test whether these two classes of compounds account for the full inhibitory activity of R401, we generated two R401 double pyoverdine and DAPG mutants, Δ pvdy Δ phld and Δ pvdl Δ phld , and conducted mBA assays using Rs and a representative set of the aforementioned commensals that we previously observed to be sensitive to R401 ( SI Appendix , Table S1 ). The double pyoverdine and DAPG mutants showed severely reduced inhibitory activity against Rs, and all other tested isolates compared to the single mutants, suggesting a cumulative impact of the two metabolites. Although the general sensitivity patterns were again isolate specific, Actinobacteria were typically inhibited by DAPG alone. DAPG and pyoverdine collectively explained >70% of R401 inhibitory activity, but a residual halo of inhibition was still observed, pointing to at least a third—yet to be defined—exometabolite ( ). Beyond the known inhibitory activities of DAPG and pyoverdine, our results provide evidence that metabolites with distinct modes of action jointly act to inhibit a taxonomically broad range of bacteria. DAPG and Pyoverdine Modulate Root Microbiota Assembly and Restrict Bacterial Diversity. We conducted root microbiota reconstitution experiments with germ-free A. thaliana Col-0 in Flowpots to study the contribution of DAPG and pyoverdine to root microbiota structure and to limiting the growth of Rs ( ). 
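Before turning to the community experiments, it may help to make explicit how the fractional contributions quoted above follow from wild-type and mutant halo sizes. The numbers in this snippet are invented, chosen only to echo the reported magnitudes (about 71% residual activity for the DAPG mutant and more than 70% of activity jointly explained by DAPG and pyoverdine).

```python
# Invented mean halo sizes (mm) against a single sensitive target; illustrative values only.
halo = {"WT": 12.0, "dphlD": 8.5, "dpvdY": 9.0, "dpvdY_dphlD": 3.0}

def residual(mutant: str) -> float:
    """Fraction of wild-type inhibitory activity retained by a mutant."""
    return halo[mutant] / halo["WT"]

dapg = 1 - residual("dphlD")            # activity lost when DAPG is missing
pyoverdine = 1 - residual("dpvdY")      # activity lost when pyoverdine is missing
jointly = 1 - residual("dpvdY_dphlD")   # activity jointly explained by both exometabolites
print(f"DAPG: {dapg:.0%}, pyoverdine: {pyoverdine:.0%}, jointly explained: {jointly:.0%}, "
      f"residual halo pointing to a further exometabolite: {residual('dpvdY_dphlD'):.0%}")
```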
We tested heat-killed (HK) or live WT R401, R401 Δ pvdy, and Δ pvdl single as well as Δ pvdy Δ phld and Δ pvdl Δ phld double mutants on a phylogenetically diverse synthetic community (SynCom) that comprises 18 bacterial isolates ( SI Appendix , Fig. S5 A and Table S1 ), in both “soil” (peat matrix) and root compartments. After 3 weeks of plant—microbe cocultivation, DNA was extracted from soil and root samples and was subjected to 16S rRNA amplicon sequencing at isolate-specific resolution. Shoot fresh weight did not differ between conditions, and no wilting was observed ( SI Appendix , Fig. S5 B and C ), indicating that Rs was unable to cause disease on A. thaliana , possibly due to the presence of the SynCom. This is consistent with very low Rs relative abundances found in root samples ( SI Appendix , Fig. S5 D ) and a previous report that direct immersion of A. thaliana roots with a very high Rs inoculum of 10 8 cells/mL was needed to induce wilting symptoms in the crucifer ( ). The addition of live, WT R401 to the SynCom significantly reduced bacterial alpha diversity (Shannon index) compared to the (HK) R401 condition in the root compartment ( P < 0.001). This major impact on the community is gradually lost when R401 exometabolite mutants were co-inoculated with the SynCom ( ). Importantly, no such effects were observed in the soil compartment ( ). Bacterial beta diversity (Bray—Curtis dissimilarity) was also drastically affected by WT R401 inoculation, explaining most of the variation and resulting in a clear separation along axis 1 (HK versus WT, P < 0.001; R2 = 0.63). All mutant samples fall in between these extremes and follow a clear trajectory, WT > single mutants > double mutants > HK, suggesting that DAPG and pyoverdine have a cumulative influence on bacterial community structure in the root compartment ( ). This is likely due to direct exometabolite activities, as the production of either metabolite did not affect plant phenotypes ( SI Appendix , Fig. S5 C ). Furthermore, in silico depletion of R401 16S rRNA sequence reads leads to similar changes in beta diversity with even higher significance levels in some cases (Δ pvdl versus WT; P = 0.002; R2 = 0.14; SI Appendix , Fig. S5 E ), indicating that, irrespective of R401 abundance, the bacterial community is altered by DAPG and pyoverdine production. While in the soil compartment, inoculation of WT R401 leads to a similar shift as in the root compartment (HK versus WT, P < 0.001; R2 = 0.66), genetic depletion of either DAPG or pyoverdine biosynthetic abilities alone or together does not alter community structure ( ), which indicates niche-specific activity of these exometabolites in the root compartment. The impact of R401 on community structure was still significant upon in silico depletion of R401 16S rRNA sequences (HK versus WT , P < 0.001; R2 = 0.35; SI Appendix , Fig. S5 F ), indicating that R401 uses other, unknown mechanisms to influence community structure in soil. To determine whether the mBA results obtained in vitro have physiological relevance in planta , we inspected relative isolate abundances across conditions, reasoning that isolates insensitive to at least one of the R401 exometabolites in vitro would not benefit from genetic disruption of either BGC in a community context in the root compartment. 
This analysis revealed that only DAPG- and/or pyoverdine-sensitive isolates benefited from disruption of the corresponding BGCs present in R401 ( ), suggesting that binary interaction data in vitro can explain the impact on individual commensal isolates in a community context in the root compartment. This observation prompted us to test whether mBA data could predict community-scale effects as well as effects on single isolates. We computed the average reduction in inhibitory activity for each R401 mutant ( ) and tested whether the lack of competitiveness could inform changes in alpha- and beta-diversity. Linear regression analyses revealed that binary interaction data largely explained the observed effect size in the root compartment for both alpha- and beta-diversity indices but had no predictive power in the soil compartment ( , respectively). In conclusion, two exometabolites produced by a single isolate have large effects on key ecological indices of a taxonomically diverse SynCom and can be linked to pairwise in vitro interaction experiments. DAPG and Pyoverdine Act as Root Competence Determinants in a Community Context. To examine whether R401-induced modulation of bacterial assembly and diversity promoted R401 competitiveness, we determined the relative abundance of R401 in SynCom samples collected from root and soil compartments. In both root and soil samples, the 16S rRNA reads of live R401 by far exceeded the barely detectable HK R401 reads, indicating that live R401 inoculum proliferates in both compartments under all conditions ( ). However, R401 accumulated in association with roots at >2x higher abundance than in the soil compartment. R401 accumulation in the root compartment was gradually reduced in the SynCom context: Single or double mutations of DAPG and pyoverdine biosynthetic genes were sufficient to reduce the R401 abundance by up to approximately 39% compared to WT R401 ( ). Importantly, the capacity of these deletion mutants to colonize the soil compartment remained unaffected ( ). We also investigated the root colonization capacity of WT and all the five R401 deletion strains in mono-associations on axenic A. thaliana plants in an agar-based system ( ) and found no significant differences in live R401 cell counts ( ). Neither the growth of R401 DAPG or pyoverdine mutants was impaired in axenic culture media ( SI Appendix , Fig. S6 B and C ). Similarly, colonization experiments with WT R401 or the Δ pvdy Δ phld double mutant in the Flowpot system revealed no difference in root colonization capacity in the absence of bacterial competitors ( SI Appendix , Fig. S6 A ). Thus, DAPG and pyoverdine cofunction as R401 root competence determinants specifically in competition with other members of this SynCom in the root compartment. To assess the prevalence of Pseudomonas strains capable of producing DAPG and pyoverdine in association with plants when grown in natural soils, we analyzed the genomes of several commensal culture collections of 1,567 isolates from roots or leaves of A. thaliana or roots of the legume Lotus japonicus ( , , , ). These plants had been grown in the same or different soils on different continents, and our genome analysis revealed increased abundance of DAPG and pyoverdine BGCs in root-derived Pseudomonas isolates ( SI Appendix , Fig. S6 D and E ). These results not only support our conclusions obtained with a defined core commensal community and gnotobiotic A. 
thaliana but also suggest a broader yet root-specific role of these two exometabolites in natural environments beyond the model crucifer.
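The community-level analyses in this section (Shannon alpha diversity, Bray–Curtis beta diversity and the regression of in vitro competitiveness loss against diversity effect sizes) map onto standard routines. A minimal sketch is shown below; it is not the authors' workflow, and the file name, treatment labels and the illustrative mBA loss values are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist, squareform
from scipy.stats import linregress

# Hypothetical isolate-resolved count table: rows = root samples, columns = SynCom members,
# plus a "treatment" column (HK, WT, single or double exometabolite mutants).
counts = pd.read_csv("root_counts.csv", index_col=0)
treatment = counts.pop("treatment")
rel = counts.div(counts.sum(axis=1), axis=0)               # relative abundances per sample

def shannon(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

alpha = rel.apply(shannon, axis=1)                          # Shannon index per sample
bray = squareform(pdist(rel.values, metric="braycurtis"))   # input for a PERMANOVA-style test

# Relate the in vitro loss of competitiveness of each mutant (from the mBA data) to the
# community-level effect size, here the gain in mean Shannon index relative to WT R401.
mba_loss = pd.Series({"dpvdY": 0.25, "dphlD": 0.35, "dpvdY_dphlD": 0.70})  # illustrative
mean_alpha = alpha.groupby(treatment).mean()
delta_alpha = mean_alpha[mba_loss.index] - mean_alpha["WT"]
fit = linregress(mba_loss.values, delta_alpha.values)
print(f"alpha-diversity gain vs mBA competitiveness loss: "
      f"R2 = {fit.rvalue**2:.2f}, P = {fit.pvalue:.3g}")
```

A permutational test of the treatment effect on the Bray–Curtis matrix (for example with scikit-bio's permanova) would complete the beta-diversity side; it is omitted here to keep the sketch dependency-light.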
Our identification of a subset of specialized metabolites, primarily polyketides, and NRPs, that are specifically produced upon competition sensing, suggests that interbacterial competition can activate cryptic BGCs in the A. thaliana root microbiota. A related study using an A. thaliana phyllosphere bacterial culture collection also demonstrated widespread production of specialized metabolites and identified 725 binary inhibitory interactions in vitro, with only 3.6% of 224 strains mediating most of these interactions ( ). The A. thaliana leaf and root microbiota overlap extensively at higher taxonomic ranks, and Pseudomonas species are core members of both communities ( ). However, the comparison between root- and leaf-derived Pseudomonas genomes collected from natural environments reveals complete or partial niche specificity, respectively, for Pseudomonas production of DAPG and pyoverdine in the root compartment ( SI Appendix , Fig. S6 D ). Furthermore, the total number of BGCs is significantly higher (approximately 1.6×) in root-derived Pseudomonas isolates ( SI Appendix , Fig. S6 E ). Likewise, the number of antagonistic interbacterial interactions within the root microbiota significantly exceeds those between abundant soil-derived bacteria ( ). These findings could be explained by the fact that natural, unplanted soils and leaves represent oligotrophic habitats ( , ), whereas the rhizoplane is a microenvironment with a continuous supply of root exudation-derived nonstructural carbohydrates to support bacterial proliferation ( ). It is likely that the large number of antagonistic interactions we found between root microbiota members in vitro is also influenced by the agar medium, which is rich in nonstructural organic carbon to mimic the nutrient-rich rhizoplane, thereby inducing the costly biosynthesis of exometabolites. We provided genetic evidence for a root niche-specific cofunction of the exometabolites DAPG and pyoverdine as root competence determinants of commensal R401 that scales at the community level by influencing alpha- and beta-diversity indices. R401 also proliferates in the soil compartment, but its growth there and its influence on the structure of the bacterial soil community are independent of these exometabolites ( and ). These observations, together with the fact that plant roots are a major sink for the uptake of rhizospheric mineral iron ( – ), imply that the public good iron becomes rate limiting in the root compartment. This could explain why the high-affinity iron chelator pyoverdine and antimicrobial DAPG cofunction and maximize R401 growth at the expense of its commensal competitors in the root niche, despite their different modes of action. Consistent with this model, the expression of core biosynthetic genes involved in the production of both DAPG and pyoverdine is induced under iron-limiting conditions in rhizospheric P. fluorescens and human pathogenic P. aeruginosa PAO1 ( , ). The indistinguishable root colonization capacity of WT R401, pyoverdine, or DAPG single, or double mutants in mono-associations with A. thaliana is also consistent with our model and further supports the specific role of these exometabolites in interspecies competition during root microbiota establishment. Likewise, P. fluorescens C7R12 pyoverdine mutants were unaffected in rhizosphere competence under axenic conditions ( ). All the seven A. thaliana root-derived Pseudomonas spp. 
strains in the At -RSphere culture collection have the genetic capacity to produce pyoverdines ( SI Appendix , Fig. S3 ). Together with the essential function of pyoverdines produced by multiple Pseudomonas root commensals in limiting the growth of pathogenic Rs ( , SI Appendix , Fig. S4 D , and ref. ), this suggests a widespread role for these siderophores in determining Pseudomonas competitiveness in the root microbiota, even in soils that contain replete bioavailable inorganic iron for plant growth ( , , , ). Thus, it is possible that in natural soils, regulated secretion of pyoverdines is an adaptive trait of the genus Pseudomonas to simultaneously compete against the root iron sink and enable pervasiveness of the Pseudomonadaceae lineage via interbacterial competition ( , , ). Our genetic data show that a sequential reduction in the diversity of specialized exometabolites in one species is sufficient to increase alpha-diversity indices and significantly alter the beta diversity of a synthetic root microbiota, thereby establishing a causal link between within-species genetic diversity and interspecies diversity changes. This suggests that in the root microenvironment, DAPG and pyoverdine produced by WT R401 cofunction to locally inhibit the growth of multiple SynCom members, which in turn increases the abundance of the producer R401. Our results obtained with the SynCom in coculture with gnotobiotic A. thaliana grown in peat matrix bear striking similarity to the changes in alpha diversity reported for this plant when grown in natural soils, where bacterial alpha diversity in bulk soil, rhizosphere, and root compartments gradually decreases toward the root ( , , ). This suggests that exometabolite-mediated antagonistic interactions of commensals underpin at least part of the reduction of alpha diversity consistently observed in natural environments at the soil—root interface. Given the prevalence of operons predicted to encode pyoverdines among root-derived Pseudomonas spp. isolates in the At -RSphere culture collection (ref. and SI Appendix , Fig. S3 ), we consider the moderately complex 18-member SynCom employed here as necessary to overcome genetic redundancy, at least for pyoverdine production and probably also for the DAPG exometabolite in the root microbiota, when A. thaliana plants are grown in natural soils ( SI Appendix , Table S2 and refs. , ). It remains to be tested whether genetic depletion of individual bacterial exometabolites can have similarly striking effects on highly complex natural microbial communities, as the assessment of microbial antagonism in vitro is often not transferable to field conditions ( – ). However, DAPG- and phenazine-overproducing mutant strains of Pseudomonas putida WCS358r were shown to differentially shape root-associated fungal communities of field-grown wheat over an experimental period of up to 139 d compared to the WCS358r WT strain ( , ). This suggests that bacterial exometabolites—such as DAPG—can have long-lasting effects on the structure of complex microbial communities. The approach described here to remove genetic redundancy by using annotated genomes of cultured microbiota members may be more generally applicable to explore community functions of other microbial genetic determinants with SynComs in gnotobiotic plant growth systems. 
Taken together, our study suggests that high-throughput binary interaction experiments, combined with genome mining for BGCs of root microbiota culture collections, can be applied to identify strains with broad-spectrum antagonistic activities that are likely robust root colonizers. This might have relevance for future interventions in the root microbiota with rational biologicals that confer beneficial traits on the host, including indirect pathogen protection and mineral nutrition.
Detailed descriptions of all utilized methods and data analysis workflows can be found in SI Appendix.

Screen for Antagonistic Interbacterial Interactions. For all mBA experiments, bacterial strains were cultured axenically. Target strains were embedded in molten 25% tryptic soy agar (TSA) and producer strains were then dropped out on top. After up to 96 h of cultivation, pictures were taken and halos were quantified.

Metabolomic Analyses. Untargeted metabolomic analysis was either conducted with axenically grown strains or on the mBA inhibition zones of ten strains (R63, R68, R71, R342, R401, R562, R569, R690, R920, and R1310), each tested against three target strains (R472D3, R480, and R553).

Detection of R401 DAPG and Pyoverdine. Using metabolite analyses, R401 DAPG and pyoverdine were detected in WT extracts and lack of metabolite production was confirmed in the respective mutants.

BGC Prediction Using antiSMASH. antiSMASH 6.0 ( ) was used to predict BGCs for all the tested strains.

Mutant Generation. Using homologous recombination, marker-free targeted R401 knockout mutants were generated lacking DAPG and/or pyoverdine biosynthetic genes.

Establishment of mini-Tn5 Transposon Mutant Collections in R401 and R569. R401 and R569 were transformed with pUTmTn5Km2 ( ), carrying a mini-Tn5 transposon, and individual colonies were then picked into 96-well plates.

mini-Tn5 Transposon Mutant Screen for Loss of R401-Mediated Growth Inhibition of Rs GMI1600. R401 mini-Tn5 transposon mutants were evaluated in parallel for their WT-like growth and loss of inhibitory activity against GFP-expressing Rs GMI1600.

mini-Tn5 Transposon Mutant Screen for Lack of Pyoverdine Fluorescence of R569. Axenically grown R569 mini-Tn5 transposon mutants were evaluated for their lack of fluorescence characteristic of pyoverdine at λ excitation = 395 nm and λ emission = 470 nm.

Identification of mini-Tn5 Transposon Integration Sites in the Genomes of R401 and R569. The chromosomal mini-Tn5 transposon integration sites in the R401 or R569 genomes were determined similarly as described before ( ).

Complementation of R401 Δpvdy. Complementation of R401 Δpvdy was conducted by expressing the coding region of R401 pvdY with its native 5′ regulatory sequences from the low-copy pSEVA22l plasmid ( ) in the Δpvdy mutant background.

In vitro Iron Mobilization Assay. The capability of R401 and R569 mutants to solubilize inaccessible ferric iron was tested using a previously described photometric assay ( ).

Validation of Bacterial Growth Rates. The growth of each R401 mutant and wild type was assessed by continuously measuring the OD600 of actively growing bacterial cultures.

Microbiota Reconstitution in the Gnotobiotic Flowpot System. Flowpots were assembled according to ref. with minor modifications. A final bacterial OD600 of 0.0025 was inoculated in ½ MS, and five surface-sterilized A. thaliana Col-0 seeds per Flowpot were sown. After 21 dpi, the fresh weight of shoots was determined, and the roots and peat matrix were harvested for bacterial community profiling.

Monoassociation Experiment of R401 on A. thaliana Seedlings. Colony-forming units of WT R401 or its mutants colonizing A. thaliana seedlings grown in ½ MS agar were determined as described ( ).

DNA Isolation. DNA was isolated from A. thaliana roots and Flowpot peat using a modified high-throughput version of the FastDNA SPIN kit for Soil (MP Biomedicals).

Library Preparation for Bacterial 16S rRNA Gene Profiling.
The v5v7 variable regions of the bacterial 16S rRNA gene were amplified from DNA template derived from roots and Flowpot peat. The resulting amplicons were tagged with sample-specific barcodes using a dual-indexing approach. Illumina paired-end sequencing was then performed in-house with the MiSeq benchtop sequencer.
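As a sketch of how halo measurements from the antagonism screen described above can be condensed into a binary interaction matrix and interaction counts, the following Python example uses a hypothetical long-format table; the column names, strain pairs, halo values, and the 1 mm cut-off are assumptions made for illustration only.

```python
import pandas as pd

# Hypothetical long-format table of halo measurements from the antagonism screen:
# one row per producer-target pair, halo_mm = size of the clearance zone.
halos = pd.DataFrame({
    "producer": ["R401", "R401", "R569", "R569", "R71", "R71"],
    "target":   ["R480", "R553", "R480", "R553", "R480", "R553"],
    "halo_mm":  [4.2, 3.1, 0.0, 1.8, 0.0, 0.0],
})

HALO_CUTOFF_MM = 1.0  # assumed threshold for calling an inhibitory interaction

# Wide producer x target matrix of halo sizes, then binarised against the cut-off.
halo_matrix = halos.pivot(index="producer", columns="target", values="halo_mm")
binary = (halo_matrix >= HALO_CUTOFF_MM).astype(int)

print(binary)
print("Total inhibitory interactions:", int(binary.values.sum()))
print("Interactions per producer:")
print(binary.sum(axis=1))
```

A summary of this kind makes it straightforward to rank producer strains by the breadth of their antagonistic activity across the tested targets.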
The Psychosocial Impact of Treating Patients with COVID-19 on Psychiatry Residents in a Community Hospital: a Qualitative Study
Study Design

A phenomenological approach which seeks to understand how individuals perceive their “lived” experience was utilized in this study. This observational, qualitative study consisted of the informed consent process followed by structured, individual, 45-min phone interviews of psychiatry house staff deployed to provide medical care to COVID-19 patients on internal medicine wards during the height of the pandemic from June 2020 to December 2020. The study was approved by the Nassau Health Systems ethics committee of Nassau University Medical Center. The criterion sampling method was applied to recruit participants, as the method aids in seeking individuals who share common experiences but differ in individual characteristics. All psychiatry residents and fellows within the community hospital, deployed to care for COVID-19 patients for 1 week or more, were considered eligible to participate in the study. The residents (n = 37) received three emails inviting them to participate. Sixteen agreed to be interviewed. Demographic characteristics of participants are shown in Table . The authors developed an interview guide consisting of predetermined, open-ended questions based on a prior literature search on the impact of pandemics on frontline workers. Demographic data was collected via a short paper survey. All the authors received training in qualitative interviewing techniques before the start of the study. The interviews were recorded then transcribed verbatim utilizing EVISTR Digital Voice Recorder and Microsoft Office. In order to safeguard the data, the authors assigned a unique identifying code to each interview, de-identified the transcripts, and used password-protected recording devices. Interview sessions continued until theoretical saturation was achieved and no new themes were derived.

Data Analysis

A codebook was developed prior to the coding process, with questions from the interview guide serving as the first set of twelve deductive codes. Subsequently, two additional iterative codes were generated after the initial review of the transcribed interviews. After each transcript was individually reviewed, they were collectively coded and themes were derived from each transcript using thematic analysis with aid of the Dedoose software. Redundant codes within umbrella codes emerged, revealing commonalities and recurring themes across the interviews. Data analysis occurred concurrently with interviews, which served as a guide for data saturation. After the interview was completed, findings were shared with participants who further verified the accuracy of the results. The coding framework used is shown in Fig. .
Theme 1: Experience Switching from Providing Psychiatric Care to Internal Medicine Care Residents and fellows discussed a disorganized environment on medical floors with shift-scheduling issues, lack of support from the medical chief residents, a steep learning curve, and a lack of knowledge of COVID-19 protocol, particularly in the early stages of the pandemic. Many participants described an overwhelming patient load and staff shortages, covering more than 10 ICU-level patients. One resident felt that some deaths occurred as a direct result of poor physician- and nurse-to-patient ratios. The participants expressed shared feelings of anxiety about their deployment to work on the medicine floor. They reported a lack of orientation prior to deployment. One stated his only role was communicating with patients’ families and writing patient notes. Moreover, a few described being treated like intern residents due to unfamiliarity with treating critically ill patients. They treated me just like an intern even though I was a 4th year resident, but I don’t mind. I’m here to help the human being. (PGY-4, psychiatry resident) Those who completed a preliminary medicine year felt more confident with the work environment and reported reduced feelings of anxiety compared to colleagues that did not. Although all psychiatry residents are trained in internal medicine wards for their first year of residency, they reported feeling a steep learning curve due to feeling unprepared to care for the critically ill COVID-19 patients. I felt as a third-year psychiatry resident, I’m very far removed. The last time I was on medicine was way back in first year and I only did four months of inpatient medicine. I never dealt with this kind of thing. Infectious diseases and these sorts of things were not my forte, it’s not what I spent my time doing. I felt very ill-prepared for this, and I felt that I would literally just do whatever my senior tells me to do. (PGY-3, psychiatry resident) Most participants discussed the impact on all patients including those with COVID-19 and psychiatric disorders. They expressed feelings of guilt when abandoning their psychiatric patients to care for those admitted with COVID-19. Some residents described hurried patient rounds to decrease risk of exposure as a negative aspect. The conversion of psychiatry detoxification units to isolation units caused treatment interruptions and the rescheduling of numerous outpatient appointments which led to psychiatric illness exacerbation. All of the Psychiatric patients on the inpatient units had to be discharged or dispositioned to some other units or another hospital. It was like a mass discharge of psychiatry patients to clear out the floor in order to make space for COVID patients. The hospital pretty much almost reinvented itself as like this infectious unit. We felt that this was sort of unfair to the psychiatry patients because their treatment is getting interrupted for COVID. (PGY-4 psychiatry resident) Psychiatry residents noted that there were positive aspects of the experience, including the overall rewarding experiences of caring for suffering patients and contributing to the provision of medical care during a global health crisis. Some expressed enhanced confidence when dealing with future pandemics. Many reported adapting to the patient load as their teams developed better organization and the hospital received additional supplies. 
Theme 2: Social Impact of COVID-19 on Psychiatric Residents The described personal, familial, social, and financial impacts of the COVID-19 pandemic varied with participants. They discussed their difficulty treating dying and suffering patients, and their frustration with the isolation from family and friends. In addition, they endorsed fear for their own lives and their families. I have a family at home, and I had to isolate myself from them basically. So, we always spent our time separate from each other, even when eating dinner, they would eat dinner on the other side of the room. And the other thing is that even sleeping... so like you know my wife would have to sleep in my daughter’s room for about 3 weeks basically since that started and then two weeks after my last day on medicine. (PGY-3 psychiatry resident) The excessive work hours caused mental and physical exhaustion. Three voiced the expectation to display utmost resilience despite their emotional burden. The participants described shared emotions such as fear, anxiety, and uncertainty. Many referred to the initial wave as overwhelming. Following the redeployment, especially for a few weeks thereafter, I was having some terrible anxiety symptoms... it’s gotten better but like I would have panic attacks I would have nightmares... wake up just like a... cold sweat...that really, I mean it impacted certainly my job you know. I knew I wasn’t performing the way that I like and that I expected myself.” (PGY-4 psychiatry resident) As mortality rates climbed, several expressed feelings of hopelessness. Others shared how the constant death was demoralizing and virtually insurmountable. Many mentioned how the suffering and death of their patients left a profound impact on them and anticipated these emotions would follow them throughout their careers with fears of premature burnout. Yes, I did feel burnout for about two to four weeks during the peak. Being there for a while at the peak, weeks after having seen death after death, that is when I personally felt burnout. The burnout was also because regardless of what you did people were still dying or not getting better. So, it kind of felt like we’re doing all the hard work, but it was futile. (PGY-2 psychiatry resident) Interviewees voiced the devastation they felt helping the families and loved ones say their goodbyes on video calls. You’re watching patients die in isolation, no family members, there are no last words, there are no last moments, there’s nothing, no gifts, no visits, nothing. It’s one of the most heartbreaking ways to die...put on the ventilator and... the only connection this family has to this patient dying is a phone call from a resident saying this person’s passed away. (PGY-4, psychiatry resident) Moreover, they described fear of the possibility of their own impending death and new-onset anxiety as they approached the hospital daily. Two participants expressed frustration toward the chaos created by conflicting news articles and social media. We do this every day now we finally shined a light on it and you’re able to see that doctors are overworked, underpaid, underappreciated, it’s a very thankless job at times you know there’s a high-stress burden high emotional burden there’s a sacrifice of the prime years of your life. (PGY-3, psychiatry resident) A few noted a positive social support system within the hospital that included friends and colleagues, families that provided food, and more time spent making personal phone calls. 
Others conveyed feelings of gratitude for support from the public, such as PPE and food donations. Some participants reported feelings of job satisfaction and fulfillment. Lastly, two participants felt their deployment served as a “call to duty” and “a service to humanity.” Theme 3: Perceived Role of Hospital Administration in the Pandemic Participants expressed mixed opinions regarding the role of the administration. Throughout the pandemic, the most common shared challenge expressed was the lack of PPE. Several expressed distress resulting from repeated use of the same mask for weeks at a time and many ultimately purchased their own protection. Many expressed a preference for voluntary deployment with appropriate role assignments. I think it would be better if people went on a voluntary basis because I mean they made us go. We didn’t have a choice and I guess as residents we felt a little bit vulnerable because it’s like maybe attendings can say no but we couldn’t say no. (PGY-3, psychiatry resident) One participant expressed frustration toward the leadership, government, and the overall failure of the nation’s response to the pandemic. I just felt the insensitivity of leadership. I just felt that the administration would be more sensitive to residents’ individual needs. That got me to the core. (PGY-2, psychiatry resident) Some reported the administration responded inadequately and felt unsupported. Additionally, participants expressed a need for improved departmental debriefings. We have process groups but already the residents were not feeling well taken care of so I don’t think the process groups were honest, because I attended some of them, and I didn’t hear residents voice out the same things that they voice out in my office. It was maybe because of a lack of trust in the administration. (PGY-4 psychiatry resident) Another expressed a lack of financial compensation by the administration, specifically with respect to temporary housing relocation costs. Several residents expressed the fear of infecting loved ones as a reason for seeking alternative lodging during their deployment. If you don’t want to stay at home, the cheapest hotel, oh... discount, comes to $99 a night and the resident is supposed to pay for it and get a refund. Some residents don’t have that money. (PGY-4 psychiatry resident) Many expressed feelings of frustration at the general unpreparedness of the health care system regarding existing catastrophic scenarios while still supporting the effort put forth by the hospital’s administrative staff. I mean the administration was also not expecting this kind of pandemic. So, I mean they were trying their best to be the most helpful. (PGY-3, psychiatry resident) Additionally, participants expressed a need for improved departmental debriefings. In contrast, others stated the administration provided support in the form of daily meals and praised the mental health support provided in departmental meetings. Theme 4: The Impact of Psychiatry Training on COVID-19 Deployment Considering their training and insight in the field of psychiatry, some participants recognized personal symptoms of anxiety and depression. They applied clinical tools to themselves, such as trauma acknowledgment, compartmentalization, self-reflection, and mindfulness. Moreover, participants reported engaging in supportive conversations with colleagues and loved ones. As a psychiatrist, you talk to patients and tell them how to take care of their own mental health. 
Talking about how to do relaxation exercises or how to think positively, helped me apply some of the principles on my own. The fact that I also try to live in the present, that also helped me during this time. There must be a time during the day when you spend internalizing your feelings or expressing it to a loved one or just reflecting on what happened during the day. (PGY-2 psychiatry resident) All but one participant agreed that their core psychiatry training inadequately prepared them to care for COVID-19 patients. Few who recently completed 4 months of internal medicine rotation or an additional full year of residency training (preliminary or transitional years) felt more comfortable in their role. Two stated these brief rotations lacked specific critical care training. I was so unprepared, I just dealt with things as they came… there was no competency, preparation of any sort. I just drew from inner strength. Every day was new, and a challenge and you just deal with it. It’s not like you’re trained in your field and you’re like ok, good I can handle this. (PGY-4 psychiatry resident) Some discussed specific communication skills such as listening, paraphrasing, summarizing, questioning, and non-verbal communication, acquired during their psychiatry training which helped them to better communicate with the patients and their families. Residents stated they allocated ample time to console those affected. They described using supportive language and therapeutic skills to help those impacted by COVID-19 cope in healthy ways. Theme 5: The Implications of the COVID-19 Pandemic on Training and Future Clinical Practice Most participants reported the cancelation of all academic activities such as didactics, journal clubs, grand rounds, and rotations. However, a few participants near the end of their residency did not experience interruptions to their training. Other participants identified exposure to telepsychiatry as an advantage during the pandemic. Participants endorsed improved hospital preparedness for future crises. One advocated a refresher in fundamental medical training such as basic cardiovascular life support and advanced cardiovascular life support immediately prior to deployment. Some reported future integration of telemedicine in clinical practice. We were able to use different software to have video conversations or do phone sessions. So that was kind of a learning experience for the future as to how to do telepsychiatry or telemedicine. In terms of my own experience, I feel that when a patient comes to the clinic or they come to the hospital and you see them in person, you can get more findings as compared to when you talk to them on the phone or on a video. (PGY-3 psychiatry resident) A few suggested advocating for mental health in all populations including healthcare providers. Others endorsed creating better support systems for disadvantaged populations in particular, those who come from low-socioeconomic backgrounds and individuals who face language barriers. I’ve witnessed in COVID the ones most severely affected by this being low-income communities of color and even when it comes to mental health care the same communities being impacted the same way. I think as a psychiatrist it’s an obligation upon you, that if you truly really have a passion for mental health and mental health care then you need to advocate for a better mental health system. (PGY-4 psychiatry resident)
The COVID-19 pandemic represents a public health crisis with potential negative impacts on patient care, professionalism, and physicians’ well-being and safety. Increased preparedness and current pandemic experiences will decrease the psychosocial hardship of healthcare providers in future crises. This qualitative study illustrates the psychiatry resident and fellow experience, challenging aspects, and potential health care improvements relating to the COVID-19 pandemic. Overall, participants voiced stress related to limited training in ICU patient care, the magnitude of mortality, scarcity of PPE, fear of self-contamination and transmission to loved ones, isolation from social support, and increased symptoms of anxiety and insomnia similarly to findings in other studies . Several concerns emerged from the interviews regarding orientation, appropriate role assignment, and adequate resource allocation. In fact, participants in this study reported the lack of orientation to ICU protocols as a challenging aspect of the deployment. Moreover, they felt their additional role assignment was not well suited to their current clinical skill set. Many participants believed their psychiatry training was more beneficial in providing emotional support for staff, patients, and patient families instead of direct medical management of COVID-19 patients. A review on previous pandemics recommended promoting human connections through patient, family, and healthcare provider communication and individualized care planning focused on supporting patients’ advanced care wishes and beliefs . Participants of the current study and Kentish-Barnes et al. felt phone communication with family was unsatisfactory and created a barrier to important end-of-life discussions and breaking bad news. A recent article noted the benefits of designing regularly updated processes and protocols, critical care training, and prioritizing the assignment of physicians most suited for these tasks . These findings correspond with those reported by the participants in the current study. Pandemic deployment role assignment of psychiatrists in training should include the provision of psychosocial support and communication with colleagues, families, and patients as it represents a significant part of their expertise. The authors believe implementing such practices may minimize physician stress in similar scenarios. Participants also suggested the implementation of support groups for processing their emotions. In particular, they stated that meetings with their program director provided psychological support and a means for debriefing. The psychiatry residents in this study stated that their knowledge and practice of self-care techniques provided some relief from pandemic-related stress for themselves and their colleagues. A recent study developed “digital packages” with information and activities for health care providers to mitigate stress during COVID-19 . Healthcare providers may apply this approach in future crises. This intervention may also include a mental health hotline accessible to all providers. Ultimately, the authors recommend that early support in a private, non-biased environment may result in positive psychosocial outcomes in future crises. Although sampling across many residents within various specialties may yield a more robust perspective of COVID-19 experiences, interviewing only psychiatry residents provides a unique perspective of treating patients within a vastly different healthcare environment from their usual practice. 
The psychiatry residents in this study were deployed to care for COVID-19 patients in an urban healthcare setting with a high patient load. Therefore, this study may not be generalizable to other healthcare settings. In conclusion, psychiatry residents and fellows described the challenges of caring for COVID-19 patients and the overwhelmingly negative impact on their training. Future research should focus on the further development of a pandemic protocol based on the current experiences across all specialties. The authors recommend that healthcare facilities maintain adequate PPE stocks for future pandemics. The knowledge gained from this study will help establish the role of the psychiatrist not only in future crises but in healthcare as a whole.
Mass spectrometry-based proteomic strategy for ecchymotic skin examination in forensic pathology
Among the recurring tasks in forensic pathology, the examination and characterization of wounds are particularly relevant, especially in cases of violent death. Skin wounds—which include incisions, burns, and contusions—are among the most common in forensic practice , . If the individual is alive at the time of the injury, blood pressure produces an extravascular collection of blood beneath the intact epidermis—known as hemorrhage or bruise—following arterial, capillary, or venous damage. Bruises are mainly caused by blunt forces directed to the skin surface . The distinction between vital skin wounds—i.e., wounds that predate the death event—and lesions that occur after death is a crucial goal in forensic pathology . In non-putrefied corpses, the evidence of hemorrhagic tissue infiltration in the skin lesion is commonly considered a macroscopic sign of vitality, whereas its absence indicates that the lesion is likely post-mortem . However, hemorrhages can be confused with livor mortis, and pre-existing contusions can be accentuated by death since hemoglobin filters through the tissues – . To identify vital skin wounds, conventional histological evaluation is routinely used, whereas immunohistochemistry and immunofluorescence methods are only applied in more complex cases. However, neither technique is useful when the injury occurs immediately before death. Furthermore, no reliable method is available to identify vital wounds in decomposed bodies, where the macroscopic appearance of the skin is completely subverted . Once the vitality of the wound has been defined, the assessment of the wound age—i.e., the time between the trauma and death—is fundamental information for establishing the causal relationship between the two events , – . In the last decades, progress has been made in wound-age estimation, although no reproducible system or model has been proposed as definitive , . In this context, proteomics could represent a promising approach for wound examination to overcome the known limitations of immunohistochemistry, i.e., low accuracy, low reproducibility and operator bias. In forensic investigation and legal medicine, proteomics is in its infancy, but it can be a confirmatory and orthogonal technique to well-established DNA-based methods, as well as an additional strategy for revealing useful information and facing new analytical challenges . The few attempts at MS-proteomic analysis for the investigation of wound vitality have used a low-throughput approach based on two-dimensional gel electrophoresis separation followed by matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF). Tarran et al. studied excision wounds on full-thickness skin in a rat model: twenty-six spots from the 2D-PAGE gel were identified by MALDI-TOF analysis, highlighting hemoglobin as the protein most subject to changes. In the same year, Pollins et al. performed 2D difference gel electrophoresis (2D-DIGE) on a protein extract from normal human partial-thickness skin and burn wounds; forty-six proteins were identified by MALDI-TOF-MS, with some potentially involved in healing. Advances in liquid chromatography-mass spectrometry (MS)-based proteomics have recently attracted the attention of forensic scientists and pathologists to answer complex forensic questions and strengthen scientific evidence for legal cases , – . 
However, shotgun proteomics with high-throughput LC-MS/MS analysis of skin samples is still an unexplored approach for wound characterization and dating, which are mostly assessed through conventional histological evaluations. High-throughput MS-based proteomic strategies can therefore be exploited to develop reliable analytical approaches that overcome the limitations associated with traditional methods for wound examination and dating. In the development of a bottom-up LC-MS-based proteomic method, a fundamental step is sample treatment: although a standardized workflow involves protein extraction, proteolytic digestion and sample clean-up, it needs to be tailored to the specific application in terms of matrix type, sample size, protein amount and solubility, co-extraction of interfering compounds, instrumental analysis, etc. The sample treatment protocol has to ensure not only high extraction and purification efficiency and high protein coverage, but also good precision and, possibly, high-throughput performance. To the best of our knowledge, Bliss et al. is the only study developing a multi-step sample preparation protocol and on-line 2D-LC-MS platform for shotgun proteomic investigation of full-thickness skin; it was aimed only at maximizing proteome coverage compared to a standard laboratory protocol, without a definite medical or forensic goal. It should be noted that the devised procedure involves time-consuming tissue cryosectioning prior to mechanical homogenization and a chromatographic separation lasting 8 h to enhance protein coverage. The authors identified more than 2000 proteins, but no results on the precision of the analytical strategy were shown. The present study represents the first time that an LC-HRMS-based method was devised for shotgun proteomics on autoptic full-thickness human skin, with particular attention to the development of a simple and high-throughput sample treatment protocol. The method was then applied for differential analysis between normal and ecchymotic tissues to gain insights into the proteomic profiles of bruises in search of markers of wound vitality and age.

Chemicals

Deionized water was obtained with a Milli-Q Element water-purification system (Millipore, Bedford, MA, USA). Urea was purchased from VWR International (Milan, Italy). Hexane (≥ 99%), sequence-grade trypsin, 1,4-dithioerythritol (DTE), iodoacetamide (IAA), acetonitrile and formic acid were purchased from Sigma Aldrich (Milan, Italy).

Sample collection

Full-thickness human skin samples, comprising epidermis, dermis, and subcutaneous tissue, were collected at the Institute of Legal Medicine in Milano (where about 700 autopsies are performed every year) in accordance with article 41 of the Italian National Police Mortuary Regulation (September 10, 1990; n° 285) and the Regio decreto (1933 n. 1592, art. 32). Tissues were taken in order to address judicial questions raised by the prosecution through further analyses. Sample collection was performed in accordance with the Declaration of Helsinki. Samples were taken only from people younger than 65, in good health and in traumatic death (fall from height, traffic accidents, homicides). For each case, both ecchymotic tissue (E) and adjacent normal tissue (N, control sample) were collected. Excision of small fragments of cadaveric skin was performed using a scalpel. Eighteen pairs of skin samples were investigated for cases with unknown wound dating.
For cases 1–12, the excised fragments of ecchymotic and normal tissue were split in two and each part was subjected to the same protein extraction procedure (Table ). In addition, 7 cases with a known ecchymosis age were also analyzed; these samples were grouped into two classes: 5 belonged to Group G0 (death within 1 h after trauma) and 2 to Group G1 (death within 1–12 h after trauma) (Table ). Skin samples were stored in plastic tubes at -80 °C until analysis.

Sample treatment

Ecchymotic and normal skin samples were treated in series. Around 100 mg of skin were transferred into a clean tube for the defatting step: 500 µL of hexane were added and the sample was vortexed for 5 min. After complete hexane evaporation, the skin sample was weighed to assess the weight reduction due to the loss of fat components. The sample was pulverized under liquid nitrogen using a homemade stainless steel closed mortar grinder. The pulverized sample was transferred into a 2 mL reinforced tube prefilled with inert 2.8 mm stainless steel beads (MK28-R, Precellys Lysing Kit), and 1 mL of 8 M aqueous urea was added. Homogenization was carried out with a Precellys Evolution tissue homogenizer (Bertin Instruments) equipped with a cooling unit (Cryolys Evolution), performing 5 cycles of 30 s at 8500 rpm with a 40 s break between cycles; up to 24 samples can be processed in parallel. The homogenate was then transferred into a 1.5 mL centrifuge tube and centrifuged at 13,200 rpm for 15 min at 4 °C, and the supernatant was retained. The concentration of total extracted protein was quantified by absorption spectroscopy at 280 nm using a Cary 400 spectrophotometer (Varian) and a 150 µL quartz cuvette. Two-way analysis of variance with interaction (two-way ANOVA) was carried out with the Statgraphics Centurion software. The ecchymotic and normal full-thickness skin samples collected from cases with unknown wound dating were pooled on the basis of protein concentration to overcome individual variability, so that each sample contributed the same amount of protein to its pool. In detail, five different pools were made for both ecchymotic (1E, 2E, 3E, 4E and 5E) and normal skin (1N, 2N, 3N, 4N and 5N), each containing 100 µg of protein extracted from six cases. Pools 4 and 5 were prepared by pooling the proteins obtained from the second extraction of cases 1–12, as detailed in Table . The protein content of each pool was quantified using the Bradford assay. Samples collected from cases with a known wound age were analyzed individually. Prior to proteolysis, all samples were reduced with 13 mM DTE (15 min at 50 °C), alkylated with 26 mM IAA (30 min at RT, in the dark) and quenched with 1 mM aqueous methylamine. The protein mixtures were diluted in 20 mM ammonium bicarbonate, pH 8, and digested overnight with sequence-grade trypsin at 37 °C using a protein:trypsin ratio of 20:1. The digestion was stopped by acidification of the samples to inactivate the enzyme.
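As a worked illustration of the equal-contribution pooling and the 20:1 protein:trypsin ratio described above, the following Python sketch computes the extract volumes needed for a 100 µg pool from six cases and the corresponding trypsin amount; the concentrations used are invented for the example.

```python
# Hypothetical total protein concentrations (µg/µL, from A280) of six case extracts.
concentrations = {"case1": 2.4, "case2": 1.1, "case3": 3.0,
                  "case4": 1.8, "case5": 2.2, "case6": 0.9}

POOL_TOTAL_UG = 100.0                               # total protein per pool
per_case_ug = POOL_TOTAL_UG / len(concentrations)   # equal contribution per case

for case, conc in concentrations.items():
    volume_ul = per_case_ug / conc                  # µg / (µg/µL) = µL to take
    print(f"{case}: take {volume_ul:.1f} µL ({per_case_ug:.1f} µg)")

# Protein:trypsin ratio of 20:1 (w/w) for overnight digestion of the pool.
trypsin_ug = POOL_TOTAL_UG / 20
print(f"Trypsin to add: {trypsin_ug:.1f} µg")
```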
Nano-liquid chromatography/high-resolution mass spectrometry analysis

Before label-free shotgun MS analysis, the proteolytic digests were desalted using Zip-Tip C18 as described in Vernocchi et al. All samples were analyzed using a Dionex Ultimate 3000 nano-LC system (Sunnyvale, CA, USA) connected to an Orbitrap Fusion™ Tribrid™ mass spectrometer (Thermo Scientific, Bremen, Germany) equipped with a nano electrospray ion source. Peptide mixtures were pre-concentrated onto an Acclaim PepMap 100 column (100 μm × 2 cm, C18; Thermo Scientific) and separated on an EASY-Spray column (ES802A, 15 cm × 75 μm ID, packed with Thermo Scientific Acclaim PepMap RSLC C18, 3 μm, 100 Å) using mobile phase A (0.1% formic acid in water) and mobile phase B (0.1% formic acid in acetonitrile 20/80, v/v) at a flow rate of 0.300 μL/min. The temperature was set to 35 °C and the samples were injected in duplicate. The MS was operated in positive, data-dependent acquisition mode to automatically alternate between a full scan (m/z 375–1500) in the Orbitrap, at a resolution of 120,000 (at m/z 200) with a cycle time of 3 s between master scans, and subsequent HCD MS/MS with the collision energy set at 35 eV. The acquired raw files were subjected to data analysis using MaxQuant software (version 1.6.0.1, https://maxquant.org/ ). The searches were performed with the built-in Andromeda search engine against the reference Homo sapiens proteome (updated on 04/2021; 77,046 sequences) from UniProt ( https://www.uniprot.org/proteomes ). The following settings were selected for analysis: strict trypsin specificity allowing up to two missed cleavages; the minimum peptide length was seven amino acids; carbamidomethylation of cysteine was set as a fixed modification. Oxidation of methionine, deamidation of asparagine and glutamine, and acetylation of the protein N-terminus were set as variable modifications. Only peptides containing at least seven amino acids were accepted, and a false discovery rate (FDR) of 0.01 was applied to both peptides and proteins. 'Match between runs' was enabled with a match time window of 0.7 min and an alignment time window of 20 min. Relative label-free quantification (LFQ) of proteins, using a minimum ratio count of one, was performed in MaxQuant, as described previously.

Bioinformatics

The protein groups identified by MaxQuant were analyzed with the Perseus software (version 1.5.5.3). Hits to the reverse database were eliminated and the LFQ intensities were converted to a log scale (log2). Only proteins present and quantified in at least one out of two repeats were considered as positively identified in a sample. A Student's t-test (FDR ≤ 0.05) was carried out to identify proteins differentially present among the different conditions. Proteins were considered to be differentially present if they were present only in one condition or showed a significant t-test difference (FDR ≤ 0.05). The precision of the protein extraction was determined by comparing the datasets derived from the first and second protein extractions available for cases 1–12 (1E versus 4E, 1N versus 4N, 2E versus 5E, and 2N versus 5N) in terms of the number of identified proteins and the LFQ intensities of the signals. The Pearson correlation coefficient values were calculated using the log2 LFQ intensities for the comparisons reported above.
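The following Python sketch outlines the downstream label-free comparison described above (log2-transformed LFQ intensities, a protein-wise Student's t-test with Benjamini-Hochberg FDR control, and a Pearson correlation between repeated extractions as a precision check). It is a schematic stand-in for the Perseus workflow, and the input table, column labels, and random values are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical LFQ intensity table: rows = protein groups, columns = pools
# (labels loosely mirror the 1E/4E and 1N/4N pools compared in the text).
rng = np.random.default_rng(0)
lfq = pd.DataFrame(
    rng.lognormal(mean=20.0, sigma=1.0, size=(500, 4)),
    columns=["1E", "4E", "1N", "4N"],
)
log_lfq = np.log2(lfq)

ecchymotic = log_lfq[["1E", "4E"]]
normal = log_lfq[["1N", "4N"]]

# Protein-wise Student's t-test between conditions, then BH correction (FDR <= 0.05).
t_stat, p_val = stats.ttest_ind(ecchymotic, normal, axis=1)
reject, p_adj, _, _ = multipletests(p_val, alpha=0.05, method="fdr_bh")

results = pd.DataFrame({
    "log2_fold_change": ecchymotic.mean(axis=1) - normal.mean(axis=1),
    "p_value": p_val,
    "p_adj": p_adj,
    "significant": reject,
})
print(results.head())

# Precision check: Pearson correlation between the first and second extraction
# of the same pool type, analogous to the 1E versus 4E comparison.
r, _ = stats.pearsonr(log_lfq["1E"], log_lfq["4E"])
print(f"Pearson r (1E vs 4E): {r:.2f}")
```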
Western Blot analysis

The protein extract pools used for the MS experiments were precipitated in 10% trichloroacetic acid (TCA) to remove urea, re-solubilized in sample buffer (2% sodium dodecyl sulfate, 5% 2-mercaptoethanol, 10% glycerol, and 0.05% bromophenol blue in 0.0625 M Tris–HCl, pH 6.8), and separated by 12% SDS polyacrylamide gel electrophoresis (20–44 µg of total protein per lane depending on the antigen), along with a pre-stained protein marker (PanReac AppliChem). Proteins were then transferred to nitrocellulose membranes (Amersham Protran, GE Healthcare) by standard methods. The membranes were blocked overnight at 4 °C with 3% skimmed milk (Millipore). They were then incubated with the primary antibodies for either 2 h at room temperature (anti-Glycophorin A antibody) or 12 h at 4 °C (anti-GAPDH antibody). Finally, the membranes were incubated for 2 h at room temperature with an HRP-conjugated anti-rabbit IgG (Sigma-Aldrich, A0545, 1:1500). Immunoblots were visualized by chemiluminescence with the AppliChem HRP substrate (A3417, 1200A-B) using a ChemiDoc imager (Bio-Rad). Protein densitometry was performed using the Image Lab 6.0.1 software (Bio-Rad). GAPDH was used as an internal control to verify equal protein loading. The primary antibodies were rabbit anti-CD235a (Glycophorin A) (Thermo Fisher Scientific, PA5-85,882, 1:1000) and rabbit anti-GAPDH (Sigma-Aldrich, HPA040067, 1:2500).
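As a toy illustration of the GAPDH-normalized densitometry readout described above, the snippet below computes a loading-corrected band ratio. The intensity values are invented and are not the study's measurements.

```python
# Invented densitometry readings (arbitrary units) exported from Image Lab;
# the GAPDH band is used to normalize for loading differences between lanes.
bands = {
    "3E": {"GYPA": 18250.0, "GAPDH": 95400.0},
    "3N": {"GYPA": 310.0, "GAPDH": 97100.0},   # near-noise GYPA signal
}

for sample, intensity in bands.items():
    ratio = intensity["GYPA"] / intensity["GAPDH"]
    print(f"{sample}: GYPA/GAPDH = {ratio:.4f}")
```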
Ethics approval

This article does not contain any studies with human living participants or animals performed by any of the authors. Informed consent for the skin samples was not needed (Police Mortuary Regulations, DPR 09/10/1990 n° 285, art. 41). Data were acquired as part of a forensic judicial investigation and in accordance with the Italian Police Mortuary Regulation (Mortuary Police Regulations, Presidential Decree 285, September 10, 1990). In accordance with Italian law, ethical approval is not required in these cases; however, the anonymity of the subjects must be guaranteed. Moreover, all the studies conducted followed the guidelines provided by national legislation and the National Bioethics Committee, as well as the Declaration of Helsinki.

Development of the protein extraction procedure

The study initially aimed to develop a high-throughput, quick and simple procedure for extracting proteins from aliquots of skin tissue to be identified by MS/MS. The coriaceous nature of full-thickness skin, which includes subcutaneous fat, dermis, and epidermis, makes protein extraction particularly challenging due to the high lipid content, insolubility, and extensive protein cross-linking. A first attempt to pulverize a small amount (about 100 mg) of skin frozen in liquid nitrogen with a conventional mortar and pestle failed, as it led to inefficient pulverization. A closed stainless-steel mortar that could be fully immersed in liquid nitrogen was therefore designed and manufactured. It consists of a mortar, a pestle, and a sleeve that adhere closely together and are quick to assemble and clean (Fig. a). Following pulverization, bead-beating homogenization was investigated using 2.8 mm beads, the size recommended by the manufacturer for processing hard tissues. By comparing the results obtained using steel (MK28-R) and ceramic (CK28-R) beads, it was found that the total protein concentration extracted was not significantly different ( p > 0.05; n = 3); steel beads were chosen for the study. Different homogenization conditions were tested by varying the number of cycles (3, 5, 7) and the rotation speed (7500, 8500, 9500 rpm) with a fixed cycle duration (30 s) and inter-cycle waiting time (40 s). A two-way analysis of variance with interaction showed that, at a confidence level of 95%, only the two main factors (i.e., number of cycles and rotation speed) were statistically significant, whereas the interaction term was not. From the interaction plot (Fig. b) it was observed that at 5 cycles and 8500 rpm the protein concentration stabilized at about 68 µg protein/mg tissue, with no significant differences at higher cycle numbers and speeds. These homogenization conditions were therefore chosen for all subsequent experiments. The precision in terms of total protein concentration was calculated as the intra-individual and inter-individual RSD%, obtaining values lower than 10% (n = 3) and 35% (n = 26), respectively.
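For readers who want to reproduce this kind of optimization analysis, the following Python sketch runs a two-way ANOVA with interaction on simulated yields laid out like the cycles × speed design above. The numbers only mimic the pattern described in the text; statsmodels is assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated yields (µg protein / mg tissue) for a 3 x 3 design with three
# replicates per cell; the values are illustrative, not the study's data.
rng = np.random.default_rng(1)
cycles = np.repeat([3, 5, 7], 9)
rpm = np.tile(np.repeat([7500, 8500, 9500], 3), 3)
mean_by_cycles = {3: 58.0, 5: 66.0, 7: 67.0}
protein_yield = np.array([mean_by_cycles[c] + (r - 7500) / 1000 + rng.normal(0, 2)
                          for c, r in zip(cycles, rpm)])

data = pd.DataFrame({"cycles": cycles, "rpm": rpm, "protein_yield": protein_yield})
model = smf.ols("protein_yield ~ C(cycles) * C(rpm)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction term
```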
Mass spectrometry-based proteomic analysis of pooled samples

After optimization of the extraction protocol, a high-throughput shotgun proteomic strategy based on a nanoLC-HRMS/MS approach was developed for the proteomic profiling of extracts from small fragments of cadaveric ecchymotic (E) and normal (N, control) skin. The developed analytical platform permitted the identification of about 2000 proteins for each sample; the number of identified proteins was comparable to that obtained by Bliss et al. by exploiting cryosectioning of skin tissues and 2D-LC separation. The high proteome coverage achieved is strictly related to the efficiency of protein extraction combined with the sensitivity provided by nanoflow-LC chromatographic separations.

Evaluation of the precision of the protein extraction method for nanoLC-HRMS/MS-based proteomic analysis of skin specimens

In order to evaluate the intermediate precision of the protein extraction protocol and nanoLC-HRMS/MS analysis, we compared samples 1N, 2N, 1E, and 2E with the corresponding independent replicates 4N, 5N, 4E, and 5E, which were obtained from the same individuals by repeating the entire workflow of protein extraction and nanoLC-HRMS/MS analysis one month later. The results clearly show that the protein extraction procedure has good intermediate precision in terms of LFQ signal intensity and the number of identified proteins, which never differed by more than 2.5% and 6%, respectively (Table ). The Pearson correlation graphs (Supplementary Fig. ) and coefficient values (Table )—0.98 and 0.95 for the comparisons between the replicates of ecchymotic skin and 0.97 and 0.98 for normal skin—reflect the high intermediate precision of the extraction method.

Proteome profiling of ecchymotic skin

A shotgun LFQ proteomic approach was applied to investigate the proteome profiles of the ecchymotic and normal skin of the cases. A Principal Component Analysis (PCA) was carried out by grouping the quantitative protein data into the ecchymotic (1E, 2E, 3E, 4E, 5E) and normal (1N, 2N, 3N, 4N, 5N) groups (Supplementary Fig. ), suggesting a differential clustering of the two types of skin samples. The pairwise comparison between the proteomic profiles of the ecchymotic tissue pools and the corresponding normal skin pools allowed us to identify the proteins in common and those present only in either one of the skin types. For each comparison, a Student's t-test (FDR ≤ 0.05) was carried out to identify proteins differentially present among the different conditions, as reported in the Venn diagrams shown in Fig. a–c. Once the analytical strategy was developed, sample pairs from 7 additional individuals with known wound age were analyzed. In particular, five individuals died within 1 h after the trauma (individuals 19–23, group G0, Table ) and two died within 1–12 h after the injury (individuals 24 and 25, group G1, Table ). It should be pointed out that individuals whose skin lesions predate their death by a known amount of time are rare in forensic practice, thus limiting the number of samples at our disposal. The proteome profile of both the ecchymotic and normal tissue of these samples was determined using the same workflow applied to the cases with unknown wound dating. Figure d,e shows the Venn diagrams of the comparison between ecchymotic and normal skin for these samples.
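The Venn-style comparison described above amounts to intersecting, across all pairwise comparisons, the sets of proteins detected only in the ecchymotic member of each pair. A minimal sketch of that operation is shown below; the UniProt accessions are placeholders standing in for the actual identifications.

```python
# Placeholder accessions for the "ecchymotic-only" protein lists from each
# pairwise comparison (illustrative only, not the study's identifications).
ecchymotic_only = {
    "1E_vs_1N": {"P02724", "P68871", "P69905", "P02768"},
    "2E_vs_2N": {"P02724", "P68871", "P01857"},
    "3E_vs_3N": {"P02724", "P69905"},
    "G0E_vs_G0N": {"P02724", "P02042"},
    "G1E_vs_G1N": {"P02724", "P68871"},
}

shared = set.intersection(*ecchymotic_only.values())
print("present only in ecchymotic skin in every comparison:", shared)
```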
The overall comparison between all ecchymotic samples (1E, 2E, 3E, 4E, 5E, G0E and G1E), in terms of proteins present only in the ecchymotic tissue with respect to the corresponding adjacent normal skin, led to the identification of a single protein, Glycophorin A (GYPA), common to all ecchymotic data sets (Fig. ). Glycophorins are a group of red blood cell (RBC) transmembrane proteins, described for the first time by Fairbanks et al. GYPA is the predominant member of this family, and it is considered a highly sensitive forensic marker for bleeding and, therefore, for wound vitality based on immunochemical methods. As recently discussed by Vignali et al., the most significant marker of wound vitality proved to be GYPA: it has been demonstrated that post-mortem alterations do not modify the reactivity of GYPA, which is resistant to putrefaction for several weeks both in air and in water. Therefore, it can be extremely useful, from a forensic point of view, to identify foci of vital hemorrhagic infiltration, especially when there is no macroscopic evidence of such infiltration. Although there may be some doubt as to whether the presence of red blood cells around damaged blood vessels is a certain sign of the vitality of a wound, the first microscopic step towards such a diagnosis is usually the observation of the mechanical consequence of a lesion, i.e., red blood cell extravasation. It may therefore be necessary to confirm the antemortem nature of a lesion by looking for molecules of the inflammatory cascade—however, the observation of extravasated red blood cells is still a preliminary step in such a diagnosis. The presence of GYPA in the ecchymotic and adjacent normal skin of the pooled samples was assessed by Western blotting (WB) analysis of samples 3N and 3E, confirming that Glycophorin A is present exclusively in ecchymotic skin (Fig. ), since GYPA gives an undetectable band, with an intensity at noise level, in normal skin. This finding suggests that this protein can be used as an MS-detectable biomarker of wound vitality.

Identification of proteins in ecchymotic skin from cases with known wound dating

To assess the contribution of wound age to the skin proteome, proteins that were exclusively present in the ecchymotic skin of the G0 or G1 groups and not in the corresponding normal skin samples, or that showed a significant t-test difference (Student's t-test, FDR ≤ 0.05), were compared. This comparison allowed us to identify 90 and 130 proteins exclusively expressed in G0E and G1E, respectively (Fig. f and Supplementary Table ). A PANTHER (Protein ANalysis THrough Evolutionary Relationships) analysis of these proteins revealed that they cluster in several pathways (Table ). Proteins attributed to the inflammation chemokine and cytokine signaling pathway were present in both G0 and G1 samples, suggesting that their expression starts immediately after trauma and continues for at least 12 h. A myriad of cytokines are involved in the healing process and guide all the phases of wound healing, generally divided into coagulation, inflammation (with the removal of dead tissues), re-epithelialization, granulation tissue formation, angiogenesis, and scar formation. Proteins involved in cytoskeletal regulation by Rho GTPase, angiogenesis and CCKR signalling were found only in wounds that predated death by at least 1 h, suggesting that their expression does not occur immediately after the lesion.
During wound repair, Rho GTPases are known to coordinate the cytoskeletal response and repair mechanisms, of which angiogenesis is part. Despite the limited number of samples, we therefore suggest that these MS-detectable proteins are potential biomarkers for wound dating.
Determining the vitality of a wound is a major challenge in forensic pathology, especially in cases of decomposed bodies. In general, several markers of wound vitality have been investigated, and progress has been made in wound-age estimation in the last few years, although the results have been scarcely reproducible. In the present study, an analytical workflow for highly precise proteomic analysis of full-thickness skin by nanoLC-HRMS was developed. The only protein uniquely identified in all ecchymotic samples was GYPA, which was validated by Western blot analysis. This finding is in accordance with literature studies based on immunochemical assays, thus strengthening the evidence that GYPA can be considered an MS-detectable marker of vital ecchymosis. As reported in the literature, GYPA appears to be resistant over time after death, making it a useful marker in corpses with advanced putrefaction phenomena. The application of the analytical protocol to ecchymotic skin samples of known age relative to death made it possible to identify other proteins differentially expressed in ecchymotic samples, although further analysis on larger datasets will be required for their validation.
The present study confirmed that mass spectrometry-based proteomics is a valuable tool for reaching conclusions in forensic death investigations; the devised sample treatment protocol could form the basis for the development of a targeted LC-MS/MS strategy for biomarker determination in skin tissues.
The role of integrated psychological support in breast cancer patients: a randomized monocentric prospective study evaluating the Fil-Rouge Integrated Psycho-Oncological Support (FRIPOS) program
Breast cancer is the most commonly diagnosed cancer in Italian women. In 2021, 834,200 women in Italy were living with a breast cancer diagnosis. Although the mortality rate has decreased thanks to early diagnosis and medical advances in the care of women with this disease, and the 5-year survival rate is 87%, it is necessary to examine in greater depth the psychological adaptation to the disease, which inevitably brings changes to the different areas of a woman's life.

The psycho-oncological support

Psycho-oncology is a branch of the oncology disciplines that is particularly concerned with two psychological dimensions: the psychological responses of patients, their families, and staff to all phases of the disease, and the psychological, social, and behavioral factors that influence cancer onset and disease survival. Psychological support for cancer patients is now recognized as a fundamental aspect of treatment pathways, but interventions are often highly targeted and delivered separately from the medical context; examples include cognitive behavioral therapy, mindfulness and relaxation techniques, psychoeducation, and family and couples therapy. However, there are few operational models describing how psycho-oncological support interacts with medical staff in acute care, especially in the Italian context, and furthermore, there are still many barriers that prevent cancer patients from seeking support.

Barriers to seeking psychological support in cancer patients

The communication gap between the patient and the medical staff is one of the main problems. In addition, according to the literature, cancer patients often have several unmet needs, with psychological and information problems being the most common. Finally, many patients are not fully aware of their emotional, cognitive, and psychological vulnerability. This may be a temporary condition due to the traumatic effects of the disease, which can lead to a pathological state of numbness, or, in other cases, it may reflect their previous cognitive or emotional functioning. Other common problems are that patients are not adequately informed about support services, that these services may be perceived as stigmatizing, and that medical staff may not be adequately informed about the role of psychosocial care.

Psycho-oncology as an integrated support in routine multidisciplinary cancer care

Whereas in the past support was usually provided only at the patient's request, today the modality has evolved towards so-called tiered models (or stepped models) based on the monitoring of suffering. For example, in the UK, the National Institute for Health and Care Excellence (NICE) has developed a psychological intervention pathway with a four-stage paradigm that targets psychological problems or needs through screening and provides treatment as needed. This approach remains the most widely used and best proven, including in terms of cost-effectiveness, but it has several limitations. First, because the causes of suffering are complex, focusing exclusively on psychological problems risks overlooking the patient's other perceived needs. In addition, research has shown that most patients who have a high distress screening score with a specific assessment tool do not want to be referred to psychological counseling, whereas patients with a low distress score often hope for some form of support.
Finally, some research suggests that an accurate diagnosis of distress is not necessarily linearly related to the ability of health care professionals to control and effectively treat the symptoms that are the cause of distress . An organized method of care that ensures that all of the patient’s needs are met in a coherent and seamless manner is referred to as an “integrated system of care.” Proposals have emerged around the world in the last decade, and although development is ongoing, studies appear promising. These procedures are integrated into clinical routines and address all patients, at least in the initial phase .
The Fil-Rouge Integrated Psycho-Oncological Support (FRIPOS) project is in line with the agreement of April 17, 2019, between the Italian State and its Regions and with the European Cancer Plan presented in February 2021, although it was designed following the Italian National Cancer Plan of 2016. In it, an integrated supportive intervention based on a close synergy between psycho-oncologists and medical and nursing staff is proposed. The aim of this study was to evaluate the impact of the FRIPOS model compared with routine care in a sample of women with breast cancer.

Clinical steps and research procedure

As shown in Table , which illustrates the clinical and research steps during the project, the psycho-oncologist was present in the FRIPOS group at various times during cancer treatment. The overall clinical goals of the intervention were identified by consulting the clinical scientific literature and can be summarized as follows: (A) encouraging the patient to adapt to the new condition by helping her cope with the physical, psychological, social, and relational changes caused by the disease; (B) deepening the problem areas by identifying the patient's needs in order to provide integrated and personalized therapeutic interventions; (C) facilitating emotional expression by encouraging the patient to recognize and control anxious and/or depressive states; (D) paying attention to body image to facilitate the process of accepting therapies; (E) communicating the patient's needs to the treatment team to jointly develop a rehabilitation project for each individual patient. We hypothesized that women who received integrated support would have lower scores for psychopathological symptoms, better psychological and emotional functioning, and better quality-of-life parameters than the group that received treatment as usual (TAU).

Ethics and research design

The intervention and the completion of the questionnaires took place during hospitalization in the surgical department and during visits to the oncological day hospital and/or to the "Alte Energie" Center of the Oncological Radiotherapy Department of the Clinical Institute of S. Anna in Brescia — San Donato Group. The project was approved by the Ethics Committee of the Clinical Institute of S. Anna in Brescia (Prot. Number: 2016.1.1; June 6, 2016). The questionnaires were labeled with an alphanumeric code, the combination of which was known only to the nursing staff responsible for randomization. The medical staff and the researchers who performed the data analysis were blinded to this information. The research design was a randomized, prospective study with data collected at three time points according to the steps of the clinical intervention: the preoperative phase (T0), the initial phase of treatment (T1), and 3 months after the start of treatment (T2) (Table ). In the pre-study phase, nurses informed patients of the opportunity to participate in the research project, obtained informed consent, and performed randomization using the online software Research Randomizer (version 4.0). The Symptom Checklist-90-R (SCL-90-R) was administered only at T0 and T2 because it was considered the specific, primary tool for evaluating the FRIPOS program by assessing symptom indices at baseline and at the end of the project. At T1 and T2, patients completed two quality of life (QLQ) questionnaires developed by the European Organization for Research and Treatment of Cancer (EORTC), namely the QLQ-C30 and QLQ-BR23.
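The study itself used the online Research Randomizer tool for allocation; purely as an illustration of the 1:1 assignment concept (not the actual procedure or seed), a minimal Python sketch could look like this:

```python
import random

def allocate(n_patients, seed=2016):
    """Assign each newly enrolled patient to FRIPOS or TAU with equal probability."""
    rng = random.Random(seed)
    return [rng.choice(["FRIPOS", "TAU"]) for _ in range(n_patients)]

print(allocate(10))   # e.g. ['TAU', 'FRIPOS', 'FRIPOS', ...]
```

Simple (unblocked) randomization of this kind can produce unequal group sizes, which is consistent with the unequal FRIPOS and TAU group sizes reported below.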
Measures

The instruments used are all standardized and validated in the Italian context.

Symptom Checklist-90-R

The SCL-90-R is a self-administered questionnaire for the assessment of psychological distress and psychopathological symptoms. It consists of 90 items measuring both a Global Severity Index (GSI), a global indicator of the intensity of the psychological distress reported by the respondent, and nine primary symptom dimensions: somatization (SOM), obsessions (O-C), interpersonal sensitivity (I-S), depression (DEP), anxiety (ANX), hostility (HOS), phobic anxiety (PHOB), paranoid ideation (PAR), and psychoticism (PSY). Responses range from 0 (not at all) to 4 (very strongly). Cronbach's α was computed on the item scores at the two administration time points, with α at T0 = 0.96 and α at T2 = 0.98.

EORTC QLQ-C30 and EORTC QLQ-BR23

The EORTC QLQ-C30 is an instrument for measuring quality of life in cancer patients. In addition to the general quality of life (QoL) assessment, this questionnaire includes the following: five functioning scales — physical (PF), role (RF), cognitive (CF), emotional (EF), and social (SF) functioning; three symptom-related scales — fatigue (FA), nausea and vomiting (NV), and pain (PA); and six individual items on loss of appetite (AP), dyspnea (DY), sleep disturbances (SD), constipation (CO), diarrhea (DI), and financial difficulties (FD). The test includes 30 items with a response range of 1 (none) to 4 (very severe). The QLQ-BR23 is a questionnaire designed to assess the specific problems of breast cancer patients. This module includes 23 questions assessing body image (BRBI), sexual functioning (BRSEF), sexual experience (BRSEE), future prospects (BRFU), side effects of systemic therapy (BRST), breast symptoms (BRBS), arm symptoms (BRAS), and hair loss (BRHL). Responses ranged from 1 (none) to 4 (very severe). Cronbach's α of the QLQ-C30 was 0.80 at T1 and 0.85 at T2, whereas the α of the QLQ-BR23 was 0.71 at T1 and 0.78 at T2.
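The EORTC scale scores reported in the Results are on a 0–100 metric. The transformation below follows the standard EORTC scoring manual convention; this is an assumption on our part, since the text does not spell out the scoring step.

```python
def eortc_score(item_scores, scale="symptom", item_range=3):
    """Linear 0-100 transformation used in the EORTC scoring manual (assumed here).

    Function scales are reversed so that higher = better functioning, while
    symptom scales keep higher = more symptoms. item_range is the raw item
    range (3 for the 1-4 items of the QLQ-C30 and QLQ-BR23).
    """
    raw = sum(item_scores) / len(item_scores)          # raw score = mean of items
    if scale == "function":
        return (1 - (raw - 1) / item_range) * 100
    return ((raw - 1) / item_range) * 100

print(eortc_score([2, 1, 2, 3], scale="function"))     # hypothetical functioning scale -> 66.7
```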
Statistical analyses

Statistical analyses were performed using the Statistical Package for the Social Sciences software (SPSS, version 26). A total of 124 subjects (69.67%) answered the entire questionnaire without omissions. Subscales with two or more missing values were not included in the calculations. This procedure resulted in 177 complete questionnaires (98.2%). To compare the two groups (FRIPOS group and TAU group) at the same administration time, multiple independent-samples t tests were performed to determine whether there were differences in the subscales of the three instruments at T0 (for the SCL-90-R) or T1 (for the QLQ-C30 and QLQ-BR23) compared with T2. Then, a series of paired-samples t tests were performed to determine whether there was a statistically significant difference between T0 (for the SCL-90-R) or T1 (for the QLQ-C30 and QLQ-BR23) and T2, in the TAU and FRIPOS groups separately. Finally, a series of 10 multiple regressions were performed, controlling for sociodemographic variables (age, romantic relationship, and presence of sons or daughters), to predict each SCL-90-R subscale at T2 from the corresponding SCL-90-R score at T0 and the QLQ-C30-derived quality-of-life (QoL) score at T2. We used multiple regression analyses to determine the relative contribution of each predictor (membership in the FRIPOS group vs membership in the TAU group, and the QoL index) to the total variance explained.

Sample

Women who had been diagnosed with operable breast cancer were included in the study (ICD-10-CM Diagnosis Code C50.919). The exclusion criteria were (a) cognitive impairment and/or psychiatric comorbidity and/or a physical condition related to the disease that, in the opinion of the treating physicians or the administrator, could lead to invalid data when completing the questionnaires; (b) poor knowledge of the Italian language; and (c) an assumed life expectancy of less than 6 months at the time of initial diagnosis. A total of 270 women were recruited for this study, of whom 182 gave informed consent to participate and provided data at all three time points (participation rate 60%): 103 belonged to the FRIPOS group and 79 to the control group (TAU). The mean age of the sample was 57.88 years (SD = 11.55), with a range from 25 to 87 years. Table shows the sociodemographic variables of the women who participated in the study.
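As a minimal illustration of the two kinds of t tests described above (the analyses were actually run in SPSS), the following SciPy sketch applies an independent-samples test between groups at one time point and paired tests within each group between time points. The scores are simulated and are not the study data.

```python
import numpy as np
from scipy import stats

# Simulated GSI-like scores (not the study data) for the two groups at T0 and T2
rng = np.random.default_rng(2)
fripos_t0, fripos_t2 = rng.normal(0.80, 0.30, 103), rng.normal(0.65, 0.30, 103)
tau_t0, tau_t2 = rng.normal(0.80, 0.30, 79), rng.normal(0.85, 0.30, 79)

# between-group comparison at a single time point (independent-samples t test)
print(stats.ttest_ind(fripos_t2, tau_t2))

# within-group change between T0 and T2 (paired-samples t test, per group)
print(stats.ttest_rel(fripos_t0, fripos_t2))
print(stats.ttest_rel(tau_t0, tau_t2))
```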
As shown in Table , which illustrates the clinical and research steps during the project, the psycho-oncologist was present in the FRIPOS group at various times during cancer treatment. The overall clinical goals of the intervention were identified by consulting the clinical scientific literature and can be summarized as follows: (A) encouraging the patient to adapt to the new condition by helping her cope with the physical, psychological, social, and relational changes caused by the disease ; (B) deepening the problem areas by identifying the patient’s needs in order to provide integrated and personalized therapeutic interventions ; (C) facilitating emotional expression by encouraging the patient to recognize and control anxious and/or depressive states ; (D) paying attention to body image to facilitate the process of accepting therapies ; (E) communicating the patient’s needs to the treatment team to jointly develop a rehabilitation project for each individual patient . We hypothesized that women who received integrated support would have lower scores for psychopathological symptoms, better psychological and emotional functioning, and better quality of life parameters than the group that received treatment as usual (TAU).
The intervention and the filling in of the questionnaires took place during the hospitalization in the surgical department and during the visit to the oncological day hospital and/or to the “Alte Energie” Center of the Oncological Radiotherapy Department of the Clinical Institute of S. Anna in Brescia — San Donato Group. The project was approved by the Ethics Committee of the Clinical Institute of S. Anna in Brescia (Prot. Number: 2016.1.1; June 6, 2016). The questionnaires were labeled with an alphanumeric code, the combination of which was known only to the nursing staff responsible for randomization. The medical staff and the researchers who performed the data analysis were blinded to this information. The research design was a randomized, prospective study with triple data collection according to the steps of the clinical intervention: preoperative phase (T0), initial phase of treatments (T1), and 3 months after the start of treatments (T2) (Table ). In the pre-study phase, nurses informed patients of the opportunity to participate in the research project, obtained informed consent, and performed randomization using the online software Research Randomizer (version 4.0). The Symptom Checklist-90-R (SCL-90-R) was used only at T0 and T2 because it was considered a specific and primary tool for evaluating the FRIPOS program by assessing symptom indices at baseline and at the end of the project. At T1 and T2, patients completed two quality of life (QLQ) questionnaires developed by the European Organization for Research and Treatment of Cancer (EORTC), namely C-30 and BR-23.
The instruments used are all standardized and validated in the Italian context. Symptom Checklist-90-R The SCL-90-R is a self-administered questionnaire for the assessment of psychological stress and psychopathological symptoms. It consists of 90 items measuring both a Global Severity Index, a global indicator of the intensity of psychological distress complained of by the respondent, and nine primary symptom dimensions: somatization (SOM), obsessions (O-C), interpersonal sensitivity (I-S), depression (DEP), anxiety (ANX), hostility (HOS), phobic anxiety (PHOB), paranoid ideation (PAR), and psychoticism (PSY). Subjects’ responses ranged from 0 (not at all) to 4 (very strongly). Cronbach’s α was measured on item scores at the two induction time points, with α at T0 = 0.96 and α at T2 = 0.98. EORTC QLQ-C30 and EORTC QLQ-BR23 The EORTC QLQ-C30 is an instrument for measuring quality of life in cancer patients. In addition to the general quality of life (QoL) assessment, this questionnaire includes the following: five function scales — physical (PF), role (RF), cognitive (CF), emotional (EF), and social (SF) functions; three symptom-related scales — fatigue (FA), nausea and vomiting (NV), and pain (PA); and six individual items on loss of appetite (AP), dyspnea (DY), sleep disturbances (SD), constipation (CO), diarrhea (DI), and financial difficulties (FD). The test includes 30 items with a response range of 1 (none) to 4 (very severe). The QLQ-BR23 is a questionnaire designed to assess the specific problems of breast cancer patients. This particular module includes 23 questions assessing body image (BRBI), sexual functioning (BRSEF), sexual experience (BRSEE), future prospects (BRFU), side effects of systemic therapy (BRST), breast symptoms (BRBS), arm symptoms (BRAS), and hair loss (BRHL). Responses ranged from 1 (none) to 4 (very severe). Cronbach’s α of the QLQ-C30 at T1 = 0.80 and T2 = 0.85, whereas the α of the QLQ-BR23 at T1 = 0.71 and T2 = 0.78.
The SCL-90-R is a self-administered questionnaire for the assessment of psychological stress and psychopathological symptoms. It consists of 90 items measuring both a Global Severity Index, a global indicator of the intensity of psychological distress complained of by the respondent, and nine primary symptom dimensions: somatization (SOM), obsessions (O-C), interpersonal sensitivity (I-S), depression (DEP), anxiety (ANX), hostility (HOS), phobic anxiety (PHOB), paranoid ideation (PAR), and psychoticism (PSY). Subjects’ responses ranged from 0 (not at all) to 4 (very strongly). Cronbach’s α was measured on item scores at the two induction time points, with α at T0 = 0.96 and α at T2 = 0.98.
The EORTC QLQ-C30 is an instrument for measuring quality of life in cancer patients. In addition to the general quality of life (QoL) assessment, this questionnaire includes the following: five function scales — physical (PF), role (RF), cognitive (CF), emotional (EF), and social (SF) functions; three symptom-related scales — fatigue (FA), nausea and vomiting (NV), and pain (PA); and six individual items on loss of appetite (AP), dyspnea (DY), sleep disturbances (SD), constipation (CO), diarrhea (DI), and financial difficulties (FD). The test includes 30 items with a response range of 1 (none) to 4 (very severe). The QLQ-BR23 is a questionnaire designed to assess the specific problems of breast cancer patients. This particular module includes 23 questions assessing body image (BRBI), sexual functioning (BRSEF), sexual experience (BRSEE), future prospects (BRFU), side effects of systemic therapy (BRST), breast symptoms (BRBS), arm symptoms (BRAS), and hair loss (BRHL). Responses ranged from 1 (none) to 4 (very severe). Cronbach’s α of the QLQ-C30 at T1 = 0.80 and T2 = 0.85, whereas the α of the QLQ-BR23 at T1 = 0.71 and T2 = 0.78.
Statistical analyses were performed using Statistical Package for the Social Sciences software (SPSS, version 26). A total of 124 subjects (69.67%) answered the entire questionnaire without omissions. Subscales with two or more missing values were not included in the calculation. This procedure resulted in 177 complete questionnaires (98.2%). To compare the two groups (FRIPOS group and TAU group) analyzed at the same time of administration, multiple independent-samples t tests were performed to determine whether there were differences in the subscales of the three instruments at T0 (for SCL-90-R) or T1 (for QLQ-C30 and QLQ-BR23) compared with T2. Then, a series of paired-samples t tests were performed to determine whether there was a statistically significant difference between T0 (for SCL-90-R) or T1 (for QLQ-C30 and QLQ-BR23) and T2, in the TAU and FRIPOS groups separately. Finally, a series of 10 multiple regressions were performed after controlling for sociodemographic variables (age, romantic relationship, and presence of sons or daughters) to predict each SCL-90-R subscale at T2 from the SCL-90-R score at T0 and the quality-of-life measure at T2, using the QLQ-C30 scores at T2. We used multiple regression analyses to determine the relative contribution of each predictor (membership in the FRIPOS group vs membership in the TAU group and QoL index) to the total variance explained.
Women who had been diagnosed with operable breast cancer were included in the study (ICD-10-CM Diagnosis Code C50.919) [110]. The exclusion criteria for the study were (a) cognitive impairment and/or psychiatric comorbidity and/or a physical condition related to the disease that, in the opinion of the treating physicians or the administrator, could lead to invalid data when completing the questionnaires; (b) poor knowledge of the Italian language; and (c) an assumed life expectancy of less than 6 months at the time of initial diagnosis. A total of 270 women were recruited for this study, of whom 182 gave informed consent to participate and provided data 3 times (participation rate 60%): 103 belong to the FRIPOS group and 79 to the control group (TAU). The mean age of the sample is 57.88 years (SD = 11.55) and ranges from 25 to 87 years. Table shows the sociodemographic variables of the women who participated in the study.
SCL-90-R

Analysis of the results at T0 in terms of psychopathological symptoms did not reveal a significant difference between the two groups (FRIPOS and TAU) on any of the SCL-90-R scales. At T2, however, there was a statistically significant difference between the two groups on all scales of the SCL-90-R. Specifically, in the FRIPOS group, there were improvements in the following: SOM (0.06, 95% CI, 0.01 to 0.12); O-C (0.20, 95% CI, 0.11 to 0.28); DEP (0.18, 95% CI, 0.04 to 0.17); ANX (0.33, 95% CI, 0.24 to 0.43); HOS (0.10, 95% CI, 0.05 to 0.16); PHOB (0.36, 95% CI, 0.01 to 0.07); PAR (0.10, 95% CI, 0.04 to 0.16); PSY (0.19, 95% CI, 0.13 to 0.25); and the GSI (0.16, 95% CI, 0.11 to 0.21). Using the repeated-measures t test, a significant mean worsening on the I-S scale of 0.21 (95% CI, − 0.32 to − 0.09) was found when comparing T0 and T2 in the TAU group (Table ).

EORTC QLQ-C30

Analysis of the data collected with the QLQ-C30, shown in Table , indicates that there were no statistically significant differences between the FRIPOS group and the TAU group at T1, making the two subgroups comparable. An independent comparison of the two groups at T2 shows that the FRIPOS group performed better on both the QoL scale and the EF scale. Longitudinal comparison (by paired t test analyses) between the groups shows that participation in the FRIPOS project resulted in an improvement on the EF scale of 5.53 (95% CI, − 9.62 to − 1.43), whereas in the TAU group there was a worsening on the EF scale of 3.45 (95% CI, 0.32 to 7.21), a worsening on the QoL scale of 5.06 (95% CI, 1.40 to 8.73), and a worsening on the PF subscale of 4.14 (95% CI, 0.16 to 6.63). In addition, there was evidence of treatment benefit in some of the subscales related to physical symptoms: patients in the FRIPOS group showed better scores on the FA, DY, and SL indices. There was also a significant worsening of 0.295 (95% CI, − 6.58 to − 0.67) in the DI subscale.

EORTC QLQ-BR23

For the QLQ-BR23 scales specifically related to breast cancer, listed in Table , patients in the TAU group worsened on both the BRBI scale (8.12, 95% CI, 3.69 to 12.55) and the BRFU scale (8.02, 95% CI, 2.51 to 13.22), which was not the case for patients in the FRIPOS group. Here, there was no significant worsening of the scores and even an improvement in the BRFU score of 9.57 (95% CI, − 15.55 to − 3.58). For the symptom scales, inclusion of patients in the FRIPOS group resulted in stable scores on all subscales, whereas significant deterioration occurred only in the TAU group, on the BRST scale of 8.02 (95% CI, − 11.95 to − 4.07), the BRBS scale of 5.10 (95% CI, − 8.61 to − 1.58), and the BRAS scale of 5.63 (95% CI, − 10.02 to − 1.23).

The role of integrated support and quality of life in explaining symptomatology at T2

A series of multiple regressions were performed to predict each subscale of the SCL-90-R at T2 from the corresponding subscale of the SCL-90-R at T0 and the QLQ-C30-derived QoL score at T2. In all regression analyses, linearity was assessed by partial regression plots and a plot of studentized residuals against predicted scores. Independence of the residuals was supported by Durbin-Watson statistics ranging from 1.476 (hostility subscale) to 2.134 (somatization subscale). Homoscedasticity was assessed by visual inspection of a plot of studentized residuals against unstandardized predicted values. There was no evidence of multicollinearity, as judged by tolerance values greater than 0.1.
The multiple regression model for somatization statistically significantly predicted somatization at F (6, 170) = 23.545, p < 0.001, adj. R 2 = 0.44. The same was true for the following subscales: O-C, with F (6, 170) = 24.000, p < 0.001, adj. R 2 = 0.44; I-S, with F (6, 170) = 9.044, p < 0.001, adj. R 2 = 0.22; DEP, with F (6, 170) = 23.496, p < 0.001, adj. R 2 = 0.43; ANX, with F (6, 170) = 22.541, p < 0.001, adj. R 2 = 0.42; HOS, with F(6, 170) = 8.819, p < 0.001, adj. R 2 = 0.21; PHOB, with F (6, 170) = 9.672, p < 0.001, adj. R 2 = 0.23; PAR, with F (6, 170) = 13.517, p < 0.001, adj. R 2 = 0.30; PSY, with F (6, 170) = 17.806, p < 0.001, adj. R 2 = 0.36. In addition, the multiple regression model for the Global Severity Index statistically significantly predicted the value of this index at T2, F (6, 170) = 24.503, p < 0.001, adj. R 2 = 0.45. In nine of ten regression models, both FRIPOS group membership and the QoL subscale significantly contributed to prediction, with a significance level of at least p < 0.05. Somatization symptomatology was the only subscale for which FRIPOS group membership did not make a statistically significant contribution to prediction, whereas the QoL subscale did. Regression coefficients and standard errors are provided in Table .
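The regression workflow just described (ordinary least squares with checks of residual independence via the Durbin-Watson statistic and of collinearity via tolerance, then reporting F and adjusted R2) can be reproduced with standard tools. The article does not state which software was used, so the following is only an illustrative Python sketch on simulated data; the column names (som_t0, fripos, qol_t2) and the reduced three-predictor model are our own stand-ins for the six-regressor models reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 180
# Placeholder data: T0 subscale score, group (1 = FRIPOS, 0 = TAU), QoL at T2
df = pd.DataFrame({
    "som_t0": rng.normal(0.6, 0.3, n),
    "fripos": rng.integers(0, 2, n),
    "qol_t2": rng.normal(65, 15, n),
})
# Simulated outcome: T2 subscale score driven by the three predictors plus noise
df["som_t2"] = (0.6 * df["som_t0"] - 0.10 * df["fripos"]
                - 0.004 * df["qol_t2"] + rng.normal(0, 0.2, n) + 0.5)

X = sm.add_constant(df[["som_t0", "fripos", "qol_t2"]])
model = sm.OLS(df["som_t2"], X).fit()

print(model.summary())                              # coefficients, F statistic, adjusted R^2
print("Durbin-Watson:", durbin_watson(model.resid)) # ~2 suggests independent residuals
for i, col in enumerate(X.columns):
    if col != "const":
        vif = variance_inflation_factor(X.values, i)
        print(f"tolerance({col}) = {1 / vif:.2f}")  # tolerance > 0.1 argues against collinearity
```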
The integrated approach proposed in the FRIPOS project seems to be an adequate response to the calls in the literature for holistic management and joins the studies that have already demonstrated its effectiveness and importance . Indeed, the inclusion of the psycho-oncologist in the treatment brought significant benefits, or at least a state of stability, in many areas (in contrast to what happened in the TAU group), especially in relation to several aspects highlighted in the literature: the development of symptomatic manifestations related to psychopathology as a result of or during the journey against cancer, and the indices of psychological functioning, especially at the emotional level . Specifically, improvements in symptomatic manifestations (somatization, obsessiveness, depression, anxiety, hostility, phobic anxiety, paranoid ideation, and psychoticism), emotional functioning, and future outlook were observed in the FRIPOS group. The significantly worse score on the I-S scale in the TAU group could be related to the challenges of a cancer diagnosis at the interpersonal level, such as changes in body image and loss of femininity, or at the level of the life-threatening event itself . The personal support in the FRIPOS group might have helped patients to cope with and regulate feelings of inadequacy, inferiority, and discomfort in interpersonal interactions, but because of the complexity of the physical and psychological correlates of breast cancer, this aspect needs further investigation.

In addition, the results show the effectiveness of the integrated intervention in terms of quality-of-life scales, especially with regard to fatigue, dyspnea, and sleep disturbances. This result can be taken as an indication of the importance of assessing not only the level of distress but also a more comprehensive picture of the person (including the assessment of unmet needs), taking into account in particular nonverbal behaviors as part of a holistic approach that can be of greater use for interventions in breast cancer patients. One of the advantages of the FRIPOS personalized approach is that it can be adapted to the specific needs of the patient. The psycho-oncologist can therefore implement psychological support throughout the oncological path on the basis of the particular psychological functioning of the person. In line with the literature , and starting from clinical observation, we found that the patients' needs concerned emotional health, continuity of care, information, adverse effects on their own body and mind, and social support. The composition of these needs in each patient varied on the basis of age, stage of treatment, social status, and family composition.

In nine of ten regression models for psychological symptomatology (all except the somatization subscale), membership in the FRIPOS group together with the quality-of-life subscales significantly contributed to the prediction of the mean score of the respective subscale at T2. These results suggest that the FRIPOS program was effective, confirming the hypothesis and joining the ranks of studies demonstrating the importance of integrated approaches . Furthermore, the regression analyses underscore the importance of monitoring quality of life in relation to the potential psychopathological consequences of breast cancer diagnosis and cancer-related treatments, as called for by the scientific community .
An integrated approach also allows the psycho-oncologist to continuously and directly assess the patient's psychophysical state, as unmediated observation allows for the capture of all implicit stress signals that cannot always be accurately captured by conventional stress measurement tools . Another strength of this integrated support is that it addresses all patients indiscriminately, i.e., both those who are unable or unwilling to consciously ask for help and those who face unmet needs and find in it a setting in which to talk about their problems. The integrated intervention does not take the form of a psychological takeover through coercion; rather, it offers a resource that the patient can access directly according to their perceived needs, an intervention that can adapt to the complexity of each case and the heterogeneity of the types of patients . Further studies on the cost-effectiveness of an integrated approach are needed, especially for health systems such as the Italian one, which faces an increasing demand for integrated psycho-oncology services but also dwindling financial resources, especially for psychosocial care. Despite methodological limitations such as the relatively small sample size, the use of self-reports, and a possible implicit influence of the greater participation in the FRIPOS group (in which the women who completed all three surveys were more numerous), this study may provide important general insights for the clinical setting, including various health situations in which the presence of a parallel psychological support network in conjunction with medical interventions could significantly affect the quality of life and psychophysiological balance of those involved in the treatment process. Another limitation is that, due to the pioneering and exploratory nature of this project, there is not yet a manualization of the integrative interventions. For this reason, the intervention was planned using information from the scientific literature on actual difficulties reported by cancer patients (especially breast cancer patients), but no measures were taken to ensure consistency of treatment. This problem needs to be addressed in the future. Potential psychological variables that would be useful to examine in future research include perceived support from family and partner and specific personality traits.
The FRIPOS project aims to create a holistic pathway for the psycho-oncological care of cancer patients. It also aims to provide Italian health policy makers with a solid decision-making basis for the timely introduction of integrated psycho-oncology services in the Italian health system. In this study, the involvement of a psycho-oncologist during cancer treatment was found to be crucial for breast cancer patients’ psychological well-being and coping with several aspects related to the disease. To improve the situation of women with breast cancer (as well as other cancer patients), it is desirable that psychological support be offered to provide patients with a flexible, integrated, and adaptable treatment pathway for each person’s specific individuality.
|
Applicability of crAssphage as a performance indicator for viral reduction during activated sludge wastewater treatment
|
94e60e35-d8ba-4800-8f82-03ced43bc73b
|
10104927
|
Microbiology[mh]
|
Waterborne infections continue to have far-reaching public health and socioeconomic consequences in both the developed and developing worlds. WHO estimates that unsafe water, sanitation, and hygiene cause ~2 million deaths annually, mainly related to infectious diarrhea (WHO ). Although viral pathogens are found in water, most countries still rely on classic fecal indicator bacteria (FIB), which have well-known shortcomings, including insufficiently reflecting the viral risk to human health. Many reasons contribute to this, including their increased susceptibility to wastewater/water treatment, sensitivity to disinfectants, low tolerance to environmental conditions, and co-occurrence in animal species (Boehm et al. ; Harwood et al. ; Payment and Locas ). Enteric viruses are the most prevalent causative agents of gastroenteritis worldwide. Over 150 human pathogenic viruses have been detected in watercourses (Fong and Lipp ; Rodríguez-Lázaro et al. ). It is therefore not practical to test water samples for all of the enteric viruses, and surrogate indicators are still needed. Viral fecal pollution indicators have previously been suggested but have not yet been extensively utilized for regulatory purposes. These previously discovered markers are divided into two categories: human pathogens and bacteriophages. Human pathogens formerly considered as viral water quality indicators include human adenovirus (HAdV), human polyomavirus (HPyV), and Aichi virus 1 (AiV-1) (Albinana-Gimenez et al. ; Hamza et al. ; Kitajima et al. ). These viral indicators have the benefit of being very specific to humans, but they are limited by low and unpredictable quantities in wastewater. Bacteriophages have also been proposed as indicators of water quality. These phage-based approaches fulfill several criteria for an ideal viral water quality indicator, such as higher concentrations in wastewater than many human pathogenic viruses and more rapid and easier culturability than human viral pathogens (Grabow ). Limited specificity to human fecal waste and lower concentrations than other recently found viral targets are potential obstacles to the use of coliphage as an indicator (Grabow ; Jofre et al. ). CrAssphage was identified by metagenomic analysis and was claimed to be the most prevalent virus in the human gut (Dutilh et al. ) before being shown to be globally dispersed (Edwards et al. ). It was highly abundant in the USA and Europe compared to Africa and Asia (Stachler and Bibby ). Further metagenomic analysis revealed that crAssphage is highly specific to human fecal material, and it was proposed for human fecal source identification (Stachler and Bibby ). However, previous research has identified crAssphage in seagull, dog, chicken, cat, and cow feces at lower quantities than in human sewage (Ahmed et al. , ; Stachler and Bibby ). Recent studies have also effectively identified crAssphage in various water matrices impacted by human fecal pollution, including river water (Ballesté et al. ; Farkas et al. ), lake water (Ahmed et al. ), stormwater (Ahmed et al. ), and seawater (Sala-Comorera et al. ; Sangkaew et al. ), showing that crAssphage may be used to identify viral contamination by municipal wastewater. The presence of crAssphage in sewage-impacted waters has also been linked to a higher risk to human health (Crank et al. ).
Despite the fact that crAssphage has been studied extensively as a human fecal marker, few studies have been performed to assess crAssphage as a process indicator in conventional activated sludge wastewater treatment facilities (Tandukar et al. ; Wu et al. ). Also, to our knowledge, no data are available on crAssphage in the Egyptian environment. Thus, the primary objectives of the present study were to assess crAssphage removal during activated sludge wastewater treatment and the suitability of crAssphage as a viral process indicator. Over a 1-year study, the occurrence and abundance of crAssphage in influent and effluent samples of two WWTPs in Greater Cairo were determined. Moreover, its association with human enteric viruses, including HAdV, HPyV, and bocaviruses, which have previously shown high dissemination in the Egyptian environment, was demonstrated. HAdV can cause a variety of diseases including gastrointestinal, respiratory, and urinary infections. HAdV is frequently identified in a variety of water matrices (Bofill-Mas et al. ; Hamza et al. , ; Hewitt et al. ; Pina et al. ). Thus, it has been considered as an indicator of human fecal contamination in water. HPyV usually does not produce symptoms in healthy people, but it may cause severe infections in immunocompromised people. HPyV is found in wastewater across the world, and several studies have proposed HPyV as a viral fecal contamination indicator (Albinana-Gimenez et al. ; Bofill-Mas et al. ). HBoV has been isolated from stool samples collected from patients with gastroenteritis as well as from respiratory tract samples (Allander ; Rizk et al. ; Weissbrich et al. ). Also, different studies have shown that HBoV is highly abundant in environmental water samples (Blinkova et al. ; Hamza et al. ).
Study sites and sampling
A total of 46 sewage samples were collected anonymously from two wastewater treatment facilities, WWTP-A and WWTP-B, located in Greater Cairo. Samples were taken monthly as grab samples over a one-year study course between 08/2018 and 07/2019. The designed capacities of these WWTPs are 330,000 m3/day for WWTP-A and 600,000 m3/day for WWTP-B. The populations served by the WWTPs are approximately 1,320,000 for WWTP-A and 2,200,000 for WWTP-B. Activated sludge is implemented in both WWTPs as a secondary treatment process. Five-liter samples were collected from both the influent and effluent. Samples were collected in sterile bottles and transported within 1 h to the laboratory for analysis.
Virus concentration
Virus concentration was performed employing the virus adsorption-elution method reported earlier by USEPA ( ). In brief, samples were processed by adding MgCl2 to a final concentration of 0.05 M. The pH was then adjusted to 3.5 with 1 N HCl. Samples were then filtered through a negatively charged HA nitrocellulose membrane with 0.45 µm pore size and 142 mm diameter. Prior to viral recovery using 70 ml of organic elution buffer (3% beef extract, 0.05 M glycine, pH 9.4), the membrane was washed with 0.5 mM H2SO4, pH 4. The eluates were subjected to an organic flocculation technique for viral re-concentration.
DNA extraction
Viral DNA was extracted from 200 µl of concentrated suspension using the QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Sterile nuclease-free water was included in each set of extractions as a negative control to monitor cross-contamination. Since environmental samples may contain PCR inhibitors, which can lead to underestimation of viral concentrations and of the frequency of positive samples, murine norovirus (MNV) was added to the samples during extraction as an exogenous control to identify the occurrence of PCR inhibition. A comparison of the Ct value of the MNV to that of the negative control showed no inhibitory effect (data not shown).
Quantification of viral genome by qPCR
In the current study, HBoVs, HAdV, and HPyV were included as human viruses, and crAssphage was tested as an indicator virus. Table contains a list of all the primers utilized in the current study. The quantification methodology for HBoV-1 targets the NP1 gene, according to Hamza et al. ( ). The quantification of HBoV-2, -3, and -4 employed a single sense primer, whereas qPCR for HBoV-2 and -4 used the same antisense primer, according to Kantola et al. ( ). DNA standards of HBoVs were prepared according to Hamza et al. ( ). The HAdV qPCR assay was performed according to Heim et al. ( ), and the HPyV qPCR assay according to Biel et al. ( ). The DNA standards of HAdV and HPyV were prepared according to Hamza et al. ( ). CrAssphage concentrations were determined using the CPQ_56 assay developed by Stachler et al. ( ). A TaqMan probe assay was used for the quantification of all viruses except HBoV-2/4 and -3, for which SYBR green qPCR assays were conducted. TaqMan real-time qPCR reactions were performed in a total volume of 20 µl containing 1 × (10 µl) Quantitect probe PCR kit (Qiagen, Hilden, Germany), 0.5 µM each of the forward and reverse primers, 0.2 µM TaqMan probe, and 2 µl DNA template. The qPCR program was 95 °C for 15 min as the initial activation step for HotStart Taq DNA Polymerase and 45 cycles of 2-step cycling for 15 s at 94 °C and 1 min at 60 °C. HBoV-2/4 and -3 SYBR green assays were conducted using the Maxima SYBR Green qPCR Master Mix Kit (Thermo Scientific). The PCR conditions were a 10 min initial denaturation step at 95 °C, followed by 45 cycles of denaturation at 95 °C for 15 s and annealing/extension at 60 °C for 1 min. Amplification was followed by one cycle of melting curve analysis. Dissociation was carried out from 60 to 95 °C with a temperature ramp of 0.05 °C/s. Analysis indicated a melting peak of 81.5 °C ± 0.3 °C for HBoV-2/4 and 80 °C ± 0.2 °C for HBoV-3. In order to exclude data affected by cross-contamination, negative controls (NTC) of nuclease-free water were included in each run. All NTCs were negative throughout the qPCRs. The amplification and data analysis were performed using the Rotorgene 6000.
Statistical analysis
The viral concentrations were expressed as gc/l of wastewater. The Kruskal–Wallis test was used for multiple comparison procedures to determine possible significant variations in the concentrations of crAssphage and human enteric viruses. Human virus concentrations were normalized as ratios over crAssphage concentrations to evaluate their differential fate. The Wilcoxon test was used to compare the ratio of enteric viruses over crAssphage between influents and effluents. Spearman's rank correlation coefficients (r) were calculated between viral concentrations using two-tailed 95% confidence intervals.
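As a rough illustration of the statistical analyses just described, the sketch below applies the Kruskal–Wallis test, Spearman's rank correlation, and a Wilcoxon signed-rank test to made-up log10 gc/l values; the numbers are not the study's measurements, and the paper does not state which statistical software was used.

```python
import numpy as np
from scipy import stats

# Hypothetical log10 gc/l concentrations in influent samples (not the study data)
crassphage = np.array([6.8, 7.2, 5.9, 7.5, 6.4, 7.0])
hadv       = np.array([5.1, 5.6, 4.8, 5.9, 5.0, 5.4])
hpyv       = np.array([5.3, 5.8, 4.9, 6.0, 5.2, 5.5])

# Kruskal-Wallis: do the three virus groups differ in concentration?
h, p_kw = stats.kruskal(crassphage, hadv, hpyv)

# Spearman rank correlation between crAssphage and HPyV concentrations
rho, p_sp = stats.spearmanr(crassphage, hpyv)

# Wilcoxon signed-rank test on virus/crAssphage ratios, influent vs effluent;
# on a log10 scale the ratio is simply the difference of the log10 values
ratio_in = hadv - crassphage
ratio_out = ratio_in + np.random.default_rng(1).normal(0.3, 0.1, ratio_in.size)
w, p_w = stats.wilcoxon(ratio_in, ratio_out)

print(f"Kruskal-Wallis H = {h:.2f} (p = {p_kw:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_sp:.3f})")
print(f"Wilcoxon W = {w:.1f} (p = {p_w:.3f})")
```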
Detection rates of crAssphage and human viruses
Over the one-year study, all viruses could be detected in the tested wastewater samples at different frequencies (Table ). HBoV-2/4, HBoV-3, and crAssphage were the most frequently detected in influent samples of the WWTPs. Influent samples were positive for at least five out of the six viruses. No clear seasonal pattern was observed for either the human pathogenic viruses or the indicators. In effluent samples, there was only a slight difference between the detection rates of the human enteric viruses, except for HBoV-1, which was detected in only seven and four samples of WWTP-A and B, respectively. CrAssphage was identified in 100% (n = 23) of effluent samples (Table ).
Concentrations of crAssphage and human viruses
The concentration of crAssphage in influent samples was significantly higher than those of HAdV, HPyV, and HBoVs (ANOVA, p < 0.0001). In wastewater influent samples, the concentration of crAssphage ranged from 1.45E+04 to 1.02E+08 gc/l in WWTP-A and from 3.51E+05 to 2.39E+08 gc/l in WWTP-B (Fig. ). Regarding human viruses, HBoVs were detected in effluent wastewater samples at concentrations orders of magnitude lower than HAdV and HPyV (Fig. ). Similarly, in effluent samples of the WWTPs, the concentration of crAssphage was significantly higher than those of HBoVs and HPyV, ranging from 1.25E+04 to 6.29E+06 gc/l in WWTP-A and from 7.49E+04 to 7.88E+06 gc/l in WWTP-B (Kruskal–Wallis test, P = 0.0001) (Fig. ). However, no significant difference between crAssphage and HAdV concentrations in effluent wastewater samples was identified. Additionally, in Fig. , the overall annual viral concentrations are compared between influent and effluent samples of the WWTPs. CrAssphage and human enteric virus concentrations were relatively stable during the study course.
Viral reduction during the treatment process
The annual mean reduction of all tested viruses was relatively similar, varying between ~1 ± 0.64 log10 for HBoVs, 0.84 ± 0.5 log10 for HAdV, 1.1 ± 0.8 log10 for HPyV, and 1.32 ± 0.7 log10 for crAssphage. No significant difference between viral reductions was observed. Figure shows ratios of human pathogenic virus concentrations in influent and effluent samples, normalized over crAssphage concentrations. The ratios showed a slight increase from influents to effluents. These ratios were used to assess the differences in the fate of crAssphage and other human viruses during the wastewater treatment process. Only samples with both targets within the quantifiable range were considered in pair comparisons.
CrAssphage correlation with human viruses
Spearman's rank correlation coefficients (r) were determined between human virus and crAssphage concentrations in influent and effluent wastewater samples. As noted in Table , a strong positive correlation (P = 0.001) was found between crAssphage and HPyV in influent samples. Also, a significant correlation (P < 0.05) was detected between crAssphage and both HAdV and HPyV in the treated samples.
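For clarity, the reductions reported above are log10 reduction values computed from paired influent and effluent concentrations, and the ratios in the last subsection are human-virus concentrations normalized over crAssphage. The Python sketch below is purely illustrative: the concentrations are invented and the helper function name is ours, not part of the study's analysis code.

```python
import numpy as np

def log10_reduction(influent_gc_l, effluent_gc_l):
    """Log10 reduction values for paired influent/effluent concentrations (gc/l)."""
    influent = np.asarray(influent_gc_l, float)
    effluent = np.asarray(effluent_gc_l, float)
    return np.log10(influent) - np.log10(effluent)

# Invented paired concentrations (gc/l) for crAssphage and HAdV
crass_in, crass_out = np.array([2.0e7, 8.5e6, 5.1e7]), np.array([9.0e5, 6.2e5, 2.4e6])
hadv_in,  hadv_out  = np.array([1.1e6, 7.4e5, 3.2e6]), np.array([2.0e5, 1.1e5, 5.3e5])

lrv_crass = log10_reduction(crass_in, crass_out)
lrv_hadv = log10_reduction(hadv_in, hadv_out)
print("crAssphage LRV: %.2f +/- %.2f log10" % (lrv_crass.mean(), lrv_crass.std(ddof=1)))
print("HAdV LRV:       %.2f +/- %.2f log10" % (lrv_hadv.mean(), lrv_hadv.std(ddof=1)))

# Human virus / crAssphage ratio (log10), influent vs effluent: an increase from
# inlet to outlet means the human virus was removed less than crAssphage.
ratio_in = np.log10(hadv_in) - np.log10(crass_in)
ratio_out = np.log10(hadv_out) - np.log10(crass_out)
print("mean log10(HAdV/crAssphage): influent %.2f, effluent %.2f"
      % (ratio_in.mean(), ratio_out.mean()))
```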
No data were previously available on crAssphage in the Egyptian environment. The primary objectives of the current work were to assess crAssphage reduction during wastewater treatment and its usefulness as a viral process indicator of the treatment process. Targeted research typically selects pathogens that are more relevant to humans or that are more abundant in wastewater, and some of these viruses were included in the present study. Samples were taken from influents and effluents of two WWTPs, and the results for crAssphage genome levels were compared with those of different human enteric viruses. All wastewater samples tested positive for crAssphage (Table ), with no identifiable seasonal variations. Also, both WWTPs showed relatively the same range of viral concentrations (Fig. ) because they use the same treatment technology, regardless of their treatment capacity. In raw sewage, the annual crAssphage concentrations varied between 1.45E+04 and 2.39E+08 gc/l (Fig. ). The log10 concentrations of crAssphage in our study are lower than the previously detected values in Florida, USA (9–10 log10 gc/l) (Ahmed et al. ), Spain (8.4–9.9 log10 gc/l) (García‐Aljaro et al. ), Japan (10.98–12.03 log10 gc/l) (Malla et al. ), Indiana, USA (8.23 ± 0.36 log10 gc/l) (Wu et al. ), and the UK (5.3–9 log10 gc/l) (Farkas et al. ). The crAssphage concentrations detected in this study are in roughly the same range as those previously reported in Thailand (5.23–7.19 log10 gc/l) (Kongprajug et al. ). On the other hand, crAssphage concentrations in effluent samples ranged from 1.25E+04 to 7.88E+06 gc/l, which is relatively the same range as determined in the effluent samples examined by Kongprajug et al. ( ), whereas studies from other geographical areas have reported higher concentrations of crAssphage in effluent samples (Ballesté et al. ; Malla et al. ; Tandukar et al. ). The differences in crAssphage concentrations between studies could be attributed to the different geographic distribution of viruses, the capacity of the WWTPs, and differences in industrialized lifestyle (Honap et al. ; Stachler and Bibby ). Moreover, the use of different concentration techniques, processed water samples, and quantification methods can contribute to the discrepancies in the viral concentrations reported by different investigations. Additionally, the diversity of crAssphage in the human gut has recently been described (Edwards et al. ). It is likely that such natural diversity in crAssphage was not detected by the CPQ_56 assay, which was designed based on the prototype crAssphage. A comparison between the levels of crAssphage and human viruses showed that, in influent and effluent samples, the mean concentration of crAssphage was one order of magnitude higher than HAdV and HPyV and three orders of magnitude higher than HBoVs (Fig. ). Similar trends have been observed in recent reports. Farkas et al. ( ) estimated that all virus-positive wastewater samples contained approximately 2 log10 more crAssphage than other enteric viruses such as NoV, AdV, and HPyV. Also, crAssphage was up to 5 orders of magnitude higher than HPyV in wastewater (Stachler et al. ). The present data showed no seasonal pattern for human viruses or crAssphage. This finding is consistent with other year-long monitoring investigations that have also revealed the constant presence of crAssphage in treated wastewater without seasonal variations (Crank et al. ; Farkas et al. ; Wu et al. ).
Meanwhile, the levels of human enteric viruses may show more variation according to the clinical situation of the population. In general, the annual mean reduction of all tested viruses varied between ~1 ± 0.64 log10 for HBoVs, 0.84 ± 0.5 log10 for HAdV, 1.1 ± 0.8 log10 for HPyV, and 1.32 ± 0.7 log10 for crAssphage. Our results agree with Farkas et al., who found up to 2 log10 reduction in crAssphage using activated sludge treatment and lower reduction levels (1 log10) by biofilter treatment (Farkas et al. ). Tandukar et al. ( ) observed that crAssphage had the greatest removal ratio (3.3 ± 1.0 log10) among studied enteric viruses such as HPyV, NoVGII, EV, and AiV. Accordingly, Tandukar et al. ( ) argued that crAssphage cannot be used as an indicator of viral reduction throughout wastewater treatment. Another study, by Wu et al. ( ), reported that the log10 reduction of crAssphage (2.88 ± 0.68) during wastewater treatment was relatively higher than that of HAdV (2.24 ± 0.53) or HPyV (1.51 ± 0.37). Although crAssphage had a greater initial concentration in the main influent, the variation in removal is likely limited to crAssphage since it was eliminated in a higher fraction than HAdV or HPyV after secondary treatment (Wu et al. ). Overall, the log10 removal rates of HBoV, HAdV, and HPyV during activated sludge treatment have been reported as 0.35–1 log10, 0.8–3.7 log10, and 1.0–3.7 log10, respectively (Hamza et al. , ; Kitajima et al. ; Sangkaew et al. ; Schmitz et al. ). Since crAssphage log reductions are less variable than those of other viruses, the results suggest that crAssphage has a high potential as a process indicator for pathogenic viral reduction during wastewater treatment. The ratio of human viruses over crAssphage (Fig. ) increased slightly from inlet to outlet samples, indicating slightly lower removal of human viruses than of crAssphage (Wilcoxon test, P > 0.05). Notably, crAssphage was detected in all samples, whereas lower detection rates were identified for the other human viruses. Data normalization over crAssphage has been proposed before to assess the performance of the wastewater treatment process. For instance, Wu et al. ( ) reported that ratios of HAdV/CPQ56 and HPyV/CPQ56 increased during secondary treatment, indicating that both viruses were removed to a relatively smaller extent than crAssphage. However, both viruses appeared to share the same removal mechanism, given the correlation between crAssphage and HAdV and HPyV. A correlation between viral human fecal indicators and viral pathogens in wastewater is required to obtain an accurate picture of the viral risk posed by human feces. The present study compared the concentration of crAssphage with those of HBoVs, HPyV, and HAdV. In influent samples, the co-occurrence analysis between crAssphage and human viruses revealed a strong positive correlation between crAssphage and HPyV, whereas both HAdV and HPyV correlated with crAssphage in effluent samples (Table ). This finding is consistent with a report of crAssphage concentrations correlating with HPyV and HAdV through a WWTP (Wu et al. ). Similarly, Crank et al. ( ) observed a positive correlation between crAssphage and DNA viruses (HPyV, HBoV) in raw sewage samples, while no correlation was found between crAssphage and HEV. Although the virus enrichment approach may affect this association, the correlation between crAssphage and HPyV was stable regardless of the concentration method (Crank et al. ).
Additionally, concentrations of crAssphage in raw wastewater have been reported to correlate positively with the concentrations of HAdV, HPyV, and NoVGII (p < 0.05), suggesting the applicability of crAssphage as a suitable indicator to estimate human enteric virus concentrations in raw wastewater. Likewise, Farkas et al. ( ) found a positive correlation between HPyV and crAssphage in both influent and effluent samples. It should be noted that locality and crAssphage marker selection in qPCR assays were likely to contribute to the observed correlations (Sabar et al. ). Future studies should investigate which crAssphage markers correlate well with each water-related pathogen in different locations. The ideal viral indicator for assessing performance in wastewater treatment should be prevalent at a high concentration in raw sewage and should be similarly or more persistent through wastewater treatment than the pathogenic viruses whose reduction is targeted. CrAssphage possesses several properties that would make it a potential viral process indicator during wastewater treatment. In raw sewage, it was the most abundant of the fecal markers utilized in the current investigation, making it easier to quantify. The phage was more persistent during the treatment process than the human viruses, enabling assessment of treatment performance. Also, high levels of crAssphage could be found in the effluent samples, facilitating evaluation of the treatment process in terms of log reduction. Moreover, crAssphage meets Bonde's criteria for an ideal indicator of waterborne pathogens, which include (i) being present when the pathogens are present, (ii) occurring in greater numbers than the pathogens, and (iii) being more resistant to disinfectants and to aqueous environments than the pathogens (NASEM ).
The current study aimed to assess crAssphage reduction in WWTPs and to evaluate its usefulness as a viral process indicator during the treatment process. When crAssphage was compared to human viruses, crAssphage was highly abundant in both raw and treated wastewater samples, without a significant difference in the removal rate. Importantly, crAssphage was associated with different human viruses in raw and treated wastewater samples. Also, the high co-occurrence and comparable fate of crAssphage and human viruses such as HAdV and HPyV during the treatment process indicate that crAssphage and human viral pathogens have similar removal mechanisms. These findings provide additional evidence of the usefulness of crAssphage as a process indicator for wastewater treatment. Additionally, its consistently high prevalence, abundance, and association with human pathogenic viruses including HAdV and HPyV in wastewater support its use as a conservative viral indicator of human fecal pollution. Since this study compared the fate of crAssphage and human DNA viruses in WWTPs, further evaluation including RNA viruses should be performed.
|
The feasibility of existing JADAS10 cut-off values in clinical practice: a study of data from The Finnish Rheumatology Quality Register
|
bd40393d-08f3-4330-a00f-01ae86a97428
|
10105448
|
Internal Medicine[mh]
|
Juvenile idiopathic arthritis (JIA) refers to chronic arthritis that begins before the age of 16 years . Early optimal treatment improves the outcome for this condition . The ideal treatment goal is clinically inactive disease (CID) , but this is not always possible. It is important to evaluate disease activity on each patient visit and adjust treatment when needed. Accordingly, there have been numerous attempts to develop tools that objectively express the activity of this disease. Disease activity has been divided into different states based on clinical criteria . The Wallace preliminary criteria for CID have been expanded to the American College of Rheumatology (ACR) provisional criteria for CID , which also embrace the duration of morning stiffness. The Wallace preliminary definition of CID and the ACR provisional criteria of CID have been used consistently in paediatric research. The literature contains several clinical definitions for minimal or low disease activity (LDA), moderate disease activity (MDA), and high disease activity (HDA) . Interpreting some of the existing clinical criteria for disease activity levels can be complex and laborious . However, the ten-joint count juvenile arthritis disease activity score (JADAS10) and particularly the clinical JADAS10 (cJADAS10) index are more convenient for everyday practice. The JADAS10 is a continuous disease activity score specific to non-systemic onset JIA and comprises four parameters: active joint count (AJC); physician’s global assessment of disease activity (PhGA) using a 10-cm linear visual analogue scale (VAS); parent/patient global assessment of well-being (PaGA) using a 10-cm linear VAS, and erythrocyte sedimentation rate (ESR) . The cJADAS10 is a modification of the JADAS10 without considering ESR . These JADAS10 indexes create uniformity in disease activity evaluation between physicians in clinical work and in research. Nevertheless, assessing the meaning of a single JADAS10 score can be cumbersome. Thus, cut-off values for JADAS10 and cJADAS10 values have been established for disease activity states (Table ). However, some disparity exists in the current cut-off sets. The objective of this study was to investigate the performance of existing JADAS10 cut-off sets, i.e., those by Backström et al. , Consolaro et al. , and Trincianti et al. using data from real-life patients in The Finnish Rheumatology Quality Register (FinRheuma).
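For orientation, the composition of the JADAS10 and cJADAS10 described above can be written down in a few lines. The sketch below is illustrative only: the ESR normalization term follows the commonly cited JADAS definition ((ESR − 20)/10, truncated to 0–10), which is not spelled out in this article and should be checked against the original publications, and the cut-off values passed to the classifier are invented placeholders rather than the Backström, Consolaro, or Trincianti sets.

```python
def jadas10(active_joint_count, phga, paga, esr=None):
    """
    JADAS10 / cJADAS10 sketch.

    active_joint_count : number of active joints, counted up to a maximum of 10
    phga, paga         : physician / parent-patient global assessments on a 0-10 VAS
    esr                : erythrocyte sedimentation rate (mm/h); omit for cJADAS10

    The ESR term ((ESR - 20)/10, truncated to 0-10) is an assumption taken from the
    commonly cited JADAS definition; verify against the original publication.
    """
    score = min(active_joint_count, 10) + float(phga) + float(paga)
    if esr is None:
        return score                                    # cJADAS10, range 0-30
    esr_norm = min(max((min(esr, 120) - 20) / 10.0, 0.0), 10.0)
    return score + esr_norm                             # JADAS10, range 0-40


def classify(score, cutoffs):
    """Map a score to CID/LDA/MDA/HDA given a cut-off set {'cid': x, 'lda': y, 'mda': z}."""
    if score <= cutoffs["cid"]:
        return "CID"
    if score <= cutoffs["lda"]:
        return "LDA"
    if score <= cutoffs["mda"]:
        return "MDA"
    return "HDA"


# Hypothetical patient and hypothetical cut-off values (the real sets differ by
# publication and by oligo-/polyarticular course; see Table in the text)
score = jadas10(active_joint_count=2, phga=3.0, paga=2.5, esr=30)
print(score, classify(score, {"cid": 1.0, "lda": 3.8, "mda": 10.5}))
```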
We retrospectively collected data from the FinRheuma register for two cohorts. These were:
Cohort 1
The data from the visits between March 2016 and September 2021 at which a non-systemic onset JIA diagnosis according to International League of Associations for Rheumatology (ILAR) criteria was confirmed in disease-modifying anti-rheumatic drug (DMARD)-naïve patients with non-systemic JIA. The patients had never received intra-articular steroid injections at the time of the first registered visit.
Cohort 2
Non-systemic onset JIA patients aged < 16 years for whom the latest visit was between January 2020 and September 2021.
The two cohorts were chosen in order to obtain one cohort with many patients with active disease (cohort 1) and one cohort with patients mainly in remission (cohort 2). The selection was made in order to investigate the capacity of the different cut-off values to detect both patients with no or low disease activity and patients with high disease activity. Only patients with oligoarthritis, extended oligoarthritis, and rheumatoid factor negative polyarthritis were included in the analyses, since the cut-offs according to Trincianti et al. are not validated for rheumatoid factor positive polyarthritis, psoriatic arthritis, or enthesitis-related arthritis. The data on age, gender, ILAR category of JIA , AJC, ESR, PhGA, PaGA, and rheumatoid factor (RF) levels were obtained. We used JADAS10/cJADAS10 scores because this is the clinical practice in Finland. For both cohorts, we analysed the distribution of patients in the CID, LDA, MDA, and HDA groups according to existing JADAS10/cJADAS10 cut-off levels. At the latest visit, we also analysed the proportion of patients with AJC > 0 when classified as being in the CID or LDA groups according to existing JADAS10/cJADAS10 cut-off levels. The background data for patients with complete and incomplete data sets were compared in an attempt to detect possible bias arising from the inclusion of only patients with complete data sets.
Statistics
Continuous variables are expressed as median and lower (Q1) and upper (Q3) quartiles. Altogether, there were 346 non-systemic JIA patients with a recorded first visit between March 2016 and September 2021 and 1200 non-systemic JIA patients with a recorded latest visit between January 2020 and September 2021 in the FinRheuma register. The differences in clinical characteristics between those who had complete registration of JADAS10 and cJADAS10 and those who had incomplete registration were tested with the Wilcoxon rank sum test for continuous variables (e.g. disease duration). When comparing these complete/incomplete patient groups on categorical variables (e.g. the proportion of antinuclear antibody positive/negative patients), Fisher's exact test was used. Fisher's exact test was also used when proportions of active joint count (AJC > 0 and AJC > 1 separately) were compared between different publications. P-values lower than 0.05 (two-tailed) were considered to indicate statistical significance. Analyses were performed using SAS System for Windows, version 9.4 (SAS Institute Inc., Cary, NC, USA) and the R statistical language (version 4.2.1; R Core Team, 2022) on Ubuntu 20.04.5 LTS.
Ethics
This study was conducted as a register-based study using data from the FinRheuma register. The quality register is maintained by the Finnish Institute for Health and Welfare (THL), which granted approval for the study.
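The group comparisons described in the statistics paragraph (Wilcoxon rank sum tests for continuous variables and Fisher's exact tests for proportions such as AJC > 0) are standard procedures; the study itself used SAS 9.4 and R 4.2.1, but, purely for illustration, an equivalent Python sketch with invented counts is shown below.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Invented 2x2 table: rows = cut-off set A vs set B, columns = AJC > 0 vs AJC == 0
# among patients classified as LDA by each set (not the study's counts)
table = [[35, 80],    # set A: 35 of 115 LDA patients had AJC > 0
         [70, 55]]    # set B: 70 of 125 LDA patients had AJC > 0
odds_ratio, p = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p:.4f}")

# Wilcoxon rank sum (Mann-Whitney U) test for a continuous variable, e.g. disease
# duration (years) in patients with complete vs incomplete JADAS10 registration
complete = [0.2, 1.5, 3.1, 0.8, 2.4, 4.0]
incomplete = [0.5, 2.2, 1.1, 3.6, 0.9, 2.8]
u, p_u = mannwhitneyu(complete, incomplete, alternative="two-sided")
print(f"Wilcoxon rank sum / Mann-Whitney U = {u:.1f}, p = {p_u:.3f}")
```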
Cohort 1: The FinRheuma register contained 346 DMARD-naïve non-systemic JIA patients who had a registered first visit between March 2016 and September 2021 with a confirmed JIA diagnosis according to ILAR criteria. Of these, 217/346 (63%) and 232/346 (67%) had complete registration of JADAS10 and cJADAS10 parameters, respectively. About two thirds of the patients were girls, and the median (Q1, Q3) age was 8 (4, 12) years for patients with both complete and incomplete data. There was a higher proportion of patients with polyarthritis among patients with a complete data set (Table ). At the first visit, there were divergent distributions of the disease activity states based on existing JADAS10 and cJADAS10 cut-off values (Fig. ). The greatest disparity was seen in the oligoarticular HDA group, where the numbers of patients in the HDA group were 67 (38%), 117 (66%), and 8 (4%) using the cJADAS10 cut-offs by Backström et al., Consolaro et al., and Trincianti et al., respectively.

Cohort 2: There were 1200 non-systemic JIA patients with a recorded latest visit between January 2020 and September 2021 in the FinRheuma register. Of these, 640 (53%) had complete registration of JADAS10 parameters and 954 (80%) of cJADAS10 parameters at the latest visit (Table ). Of the patients with complete registration of JADAS10/cJADAS10 parameters in cohort 1, 100 and 136, respectively, were also part of cohort 2 with complete registration (16% and 14% of the corresponding cohort 2 groups). At the latest visit, the majority of the patients were in the CID group (Fig. ). The greatest disparity between the different cut-offs was seen in the cJADAS10 cut-off for CID in polyarticular patients, where the numbers of CID patients were 279, 329, and 367 using the cut-offs by Backström et al., Consolaro et al., and Trincianti et al., respectively. In this group, a significantly larger proportion of patients classified as being in CID had an AJC > 0 when using the JADAS10/cJADAS10 cut-offs by Trincianti et al. compared with the other cut-offs (Table ). A marked disparity between the different cut-offs was also seen in the JADAS10 and cJADAS10 cut-offs for LDA in both oligoarticular and polyarticular patients at the latest visit (Fig. ). In the polyarticular LDA group, the AJC was greater than zero in 30.7%/34.4% of patients when the Backström/Consolaro JADAS10 cut-offs were used, compared with 56.8% when the Trincianti JADAS10 cut-offs were used (p < 0.001). In the LDA group, 11%/10% of the polyarticular patients had an AJC of two or more when the Backström JADAS10/cJADAS10 cut-offs were used, compared with 7%/3% when the Consolaro JADAS10/cJADAS10 and 35%/29% when the Trincianti JADAS10/cJADAS10 cut-offs were used (p < 0.001) (Table ).
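The group comparisons reported above (proportions of patients with AJC > 0 or AJC ≥ 2 under different cut-off sets, and continuous background variables between complete and incomplete registrations) rely on Fisher's exact test and the Wilcoxon rank sum test. The sketch below shows what such tests look like in code; the counts and values are invented for illustration, not taken from the FinRheuma register, and Python/scipy stands in for the SAS/R environment actually used.

```python
# Illustrative only: all numbers below are invented, not FinRheuma data.
from scipy.stats import fisher_exact, mannwhitneyu

# 2x2 table for Fisher's exact test:
# rows = two cut-off schemes, columns = (AJC > 0, AJC == 0) among LDA patients.
table = [[31, 70],   # hypothetical scheme A
         [57, 44]]   # hypothetical scheme B
odds_ratio, p_fisher = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")

# Wilcoxon rank sum (Mann-Whitney U) test for a continuous background variable,
# e.g. disease duration in patients with complete vs. incomplete registration.
complete = [0.4, 1.2, 2.5, 3.1, 0.8]
incomplete = [1.0, 2.9, 4.4, 3.8, 2.2]
stat, p_wilcoxon = mannwhitneyu(complete, incomplete, alternative="two-sided")
print(f"Wilcoxon rank sum test: U = {stat:.1f}, p = {p_wilcoxon:.4f}")
```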
This Finnish register-based study showed that, at the latest visit, a small but noticeable proportion of the polyarticular patients in CID and over 50% of the polyarticular patients in LDA had an AJC > 0 according to the latest JADAS10 cut-offs by Trincianti et al. Furthermore, approximately one third of the polyarticular patients in the LDA group had an AJC of two or more, and a considerably smaller proportion of patients was classified as HDA using the JADAS10 and cJADAS10 cut-offs by Trincianti et al., even among the newly diagnosed DMARD-naïve patients. Using the JADAS10 and cJADAS10 cut-offs by Consolaro et al. resulted in the lowest proportion of LDA patients with an AJC of two or more, both for oligoarticular and polyarticular patients. The divergence between the studies seeking to find optimal JADAS10 cut-off values might be due to differences in the cohorts as well as the statistical approaches chosen for the analyses. Above all, however, the differences are due to divergent classifications of the disease activity states used as a reference. The disease activity states set by Beukelman et al. and used in the studies by Backström et al. are not validated, and the HDA definition is set very high. Moreover, the Beukelman criteria state that a patient with a VAS over 2 already has MDA, even if the physician sees no signs of disease activity. This is also the weakness of the disease activity states set by Magni-Manzoni et al. and used in the studies by Consolaro et al., since they state that a patient with a VAS over 2.1 has MDA, even if, again, the physician sees no signs of disease activity. However, the strength of those criteria is that they are objective and can be interpreted in approximately the same way, irrespective of the physician using them. In the latest study on this topic, a large multinational study by Trincianti et al., disease activity states were established according to the opinion of the expert, which we suspect is a varying standard. Moreover, these cut-offs were not validated for JIA diagnoses other than persistent or extended oligoarthritis and seronegative polyarthritis; they are not intended for seropositive polyarthritis, psoriatic arthritis, or enthesitis-related arthritis. It is important to include the patient's perspective in evaluating disease activity, but the PaGA parameter in JADAS and cJADAS is prone to raise the JADAS/cJADAS score even when there are no objective signs of inflammation. It has recently been shown that PaGA correlates better with measures of health-related quality of life than with measures of disease activity. Recommendations for treating JIA to target have been formulated by an international task force. Specific treatment targets and guidance on general treatment strategies were described with the intention of improving patient care in clinical practice. Despite the ongoing discussion of optimal goals, the main treatment target is preferably CID and, when this is not possible, LDA. Thus, using cut-offs where approximately one third of LDA patients have an AJC of two or more is not optimal. The proportion of LDA patients with an AJC of two or more was clearly lowest in both oligoarticular and polyarticular patients using the JADAS10 and cJADAS10 cut-offs by Consolaro et al., which is their great advantage. Another clear benefit of the cut-offs by Consolaro et al. is that the cut-offs for CID are the same regardless of the disease course.
The other existing cut-offs require division of the patients into oligoarticular and polyarticular disease courses. Since the oligoarticular and polyarticular disease courses are not separate disease entities but rather represent a spectrum of disease activity for the different forms of arthritis that come under the umbrella diagnosis of JIA, we think it is both logical and practical to have only one set of JADAS10 cut-offs for disease activity states, regardless of the oligoarticular or polyarticular disease course. The strengths of this study are the large number of analysed patients and the inclusion of both newly diagnosed DMARD-naïve patients and patients with a longer-lasting disease course. A limitation of this study is the lack of an international perspective. It has been shown that physicians in Northern Europe and Finland tend to score PhGA lower than those in other parts of the world. Thus, the results might have been different for a more geographically widespread population. In conclusion, we found the cut-offs by Consolaro et al. to be the most feasible both in clinical work and in research, since the cut-off levels for CID do not result in patients with AJC ≥ 1 being misclassified as in remission, and the proportion of patients with an AJC of two or more in the LDA group is the lowest using these cut-offs. A further clear benefit of the Consolaro et al. cut-offs is that the cut-off level for CID is the same in oligoarticular and polyarticular patients.
|
Use of preventive medication and supplements in general practice in patients in their last year of life: a retrospective cohort study
|
abcbe76b-7a31-4e28-96f1-bbe0eb71d189
|
10105458
|
Family Medicine[mh]
|
Medication burden increases for patients in the last phase of life due to the continuation of drugs for comorbid conditions and the addition of medications for symptom control. The median number of prescriptions per patient per day is seven during the last week of life in the Netherlands, which is defined as polypharmacy (≥ 5 medications). Schenker et al. evaluated the associations between polypharmacy and quality of life (QoL) in 372 patients with a life-limiting illness (LLI), demonstrating a relationship between more medications, higher symptom burden, and lower quality of life. In the last phase of life, some drugs can become inappropriate medications (IMs) due to (1) an increased risk of adverse events caused by drug-drug interactions or altered pharmacokinetic and pharmacodynamic parameters; (2) a time to benefit that may exceed the predicted life expectancy; and (3) changed care goals. A questionnaire study found that 73% of physicians agreed with the statement that patients who are in the last phase of life use too many medications. Despite this consensus among healthcare professionals, there seems to be a passivity towards the reduction of IMs. A systematic review of qualitative research suggested that limited consultation time, fragmented care among multiple prescribers, and the ambiguity of changing care goals add to the clinical complexity that prescribers are faced with. Geijteman et al. also found that physicians did not consider withdrawal of certain drugs because of limited awareness, low priority, and uncertainty about the benefits and harms of continuing or discontinuing certain medications. In addition, patients may feel that healthcare professionals are 'giving up hope' or that they are not receiving optimal care when medication is discontinued. Although guidance around discontinuing medications and supplements in patients with a limited life expectancy is scarce, there is consensus that several preventive medications and supplements, such as cholesterol-lowering medications, vitamins, and calcium supplements, may be seen as inappropriate in patients' last year of life. General practitioners (GPs) frequently have long-term relationships with patients, which are often intensified in the last phase of life. To date, little is known about (in)appropriate prescriptions in patients in the last phase of life in the general practice setting. To our knowledge, there has been only one study conducted in the home setting, which determined the most utilised (preventive) medications in the last week of life of 60 patients. In the Netherlands, more than one third of patients who are in need of palliative care die at home. Considering the significant size of this group and the scarcity of literature, it is important to assess (in)appropriate medication and supplement use in these patients in relation to their aims regarding quality of life. The primary aim of this study was to assess the prescription, continuation, and discontinuation of four preventive, and potentially inappropriate, medication groups – cholesterol-lowering medication, vitamins, calcium supplements, and bisphosphonates – in adults with an LLI during the last year of life in the home setting. The secondary aims were to assess the reasons for discontinuing these drugs as documented in the general practitioners' patient files and whether these reasons affected the time between medication discontinuation and death.
Study design and population: We performed a retrospective cohort study of general practitioners' patient files. Data for this retrospective cohort were obtained from the Julius General Practitioners' Network (JGPN) database, which contains routine primary care data extracted from electronic records of primary care practices in and around the area of Utrecht. Patients were eligible for inclusion if they (1) were aged 18 years or older, (2) had died in 2019, and (3) had been diagnosed with an LLI at least one year before death. An LLI was defined as an illness which often causes a patient to receive palliative instead of curative care and will eventually lead to the patient's death. Appendix lists the diagnoses classified as life-limiting.

Data collection: Data consisted of patient characteristics, clinical notes of GP consultations, coded diagnoses, and medication history over the last two years of life. Patient characteristics, which were used to describe the study population, included date of birth, date of death, and gender. Registered diagnoses were classified according to the ICPC (International Classification of Primary Care). In case of multiple coexisting LLIs, the patient was categorised according to the longest existing illness. Some of the LLIs were misclassified in the GP's file (e.g. when the diagnosis was documented under the ICPC code for the primary health issue rather than the definitive diagnosis), and for some diagnoses, such as frailty, there was no ICPC code. For this reason, the files of patients who did not have an LLI based on ICPC codes were screened manually for an LLI (AA, FB). Patients' files were also screened manually for reasons for discontinuing medication around the stop date (AA). A supplement for the data collection is included in appendix.

Outcomes: We defined having an LLI as having at least one ICPC code linked to a life-limiting diagnosis registered in the patient's file, or when an LLI was established during manual review of the patient's file. Our primary outcome was the use of four different groups of preventive medications and supplements in the last year of life: cholesterol-lowering medications, vitamins, calcium supplements, and bisphosphonates. These four groups were evaluated because they can become inappropriate in the last year of life and because they are frequently prescribed and thus used by a large group of patients. Fibrates, even though they are cholesterol-lowering, were not included in the analyses since they are not a first-line treatment in the Netherlands and are therefore hardly ever prescribed. Corresponding Anatomical Therapeutic Chemical (ATC) classification codes are listed in appendix. Medication use in the last year of life was defined as having at least one prescription in the last year of life regardless of stop date. Medication that was first prescribed in the last year of life and was not a repeat prescription was referred to as 'started with medication'. 'Stopped with medication' was defined as medication that had a stop date in the last 365 days before death. All drugs within one medication group had to be discontinued to be listed as 'stopped with medication'. No distinction was made between different dosages or combinations of medications in one tablet (e.g., the cholesterol-lowering combination of simvastatin and ezetimibe).

Data analysis: Descriptive analyses were used to describe the study population and the frequency of starting, using, and discontinuing medication in the last year of life.
All analyses were conducted using IBM SPSS Statistics 26 (IBM Corporation, 2019). Patients' files were screened manually for reasons for discontinuing medication (AA). Criteria for classifying the reasons for discontinuation are provided in appendix.

Ethics: This research was reviewed by the Medical Ethics Committee (METC) NedMec and was deemed not to fall under the Medical Research Involving Human Subjects Act of the Netherlands (Dutch: WMO) (21–498). All participating GPs in the JGPN adequately inform their patients about the use of their medical records for research purposes through flyers and/or information on their website. Patients may opt out, in which case their routine care data are not included in the JGPN database. This means that patients do not opt out of this specific study but of all studies that use the JGPN database.
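To make the outcome definitions above more tangible, the sketch below derives the 'used', 'started', and 'stopped' indicators from a toy prescriptions table. It is an illustrative, assumption-laden example: the column names, the simplified handling of ongoing repeat prescriptions, and the use of Python/pandas are choices made here for clarity, not the actual SPSS procedures applied to the JGPN data.

```python
# Illustrative sketch only: toy data and column names are assumptions, and the
# study analyses were carried out in SPSS, not pandas.
import pandas as pd

rx = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "atc_group":  ["cholesterol-lowering", "cholesterol-lowering", "vitamin", "cholesterol-lowering"],
    "start_date": pd.to_datetime(["2018-02-01", "2018-09-15", "2019-01-10", "2017-05-01"]),
    "stop_date":  pd.to_datetime([None, "2019-03-20", None, None]),
    "is_repeat":  [False, True, False, False],
})
death = pd.Series(pd.to_datetime(["2019-06-30", "2019-11-15"]), index=[1, 2], name="death_date")

rx = rx.join(death, on="patient_id")
window_start = rx["death_date"] - pd.Timedelta(days=365)

# 'Used': at least one prescription in the last year of life (simplified here to
# prescriptions issued within that window, regardless of stop date).
rx["used"] = rx["start_date"] >= window_start
# 'Started': a first, non-repeat prescription issued within the last year of life.
rx["started"] = rx["used"] & ~rx["is_repeat"]
# 'Stopped': a stop date falling within the last 365 days before death.
rx["stopped"] = (rx["stop_date"] >= window_start) & (rx["stop_date"] <= rx["death_date"])

# A medication group counts as stopped only if every drug within it was discontinued.
group_stopped = rx.groupby(["patient_id", "atc_group"])["stopped"].all()
print(rx[["patient_id", "atc_group", "used", "started", "stopped"]])
print(group_stopped)
```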
Study population: A total of 1281 patients included in the JGPN database died in 2019. Of those, 666 met the eligibility criteria for this study (Fig. ). Table displays the patient characteristics. Of the included patients, 334 (50%) were female. Age at the time of death ranged from 32 to 106 years, with a median age of 82 [IQR: 74–89]. As shown in Table , the most frequent LLIs were cancer, followed by chronic obstructive pulmonary disease (COPD), frailty, and congestive heart failure.

Medication and supplement use: The four groups of preventive medications and supplements relevant to this study comprised a total of 32 different drugs. Table shows the use of these medication subgroups over the last year of life. Of the 666 patients, 458 (69%) used at least one of these preventive drugs in the last year of life. Vitamins were used by 36% of the patients, cholesterol-lowering medication by 35%, calcium supplements by 24%, and bisphosphonates by 9%. Cholesterol-lowering medication was stopped by 110 patients (48%), and bisphosphonates were stopped in 42 patients (70%), with a median time between discontinuation and death of 119 days. In the last year of life, vitamins were started by 10% of the patients, calcium supplements by 7%, cholesterol-lowering medication by 3%, and bisphosphonates by 2%.

Reasons for discontinuation: In Table , the reasons for discontinuation are summarized per group of preventive medication and supplement. The median time between discontinuation and death was longest in the case of side effects (e.g., myalgia, gastrointestinal complaints) or discontinuation in the context of a medication review, with 312 and 71 days, respectively. The shortest median time between discontinuation and death was seen when the patient was unable to take the medication (e.g., dysphagia, nausea) or when the patient was undoubtedly in the terminal stage, with median times of 2 and 7 days, respectively.
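The medians reported here (e.g., days between discontinuation and death per documented reason) are straightforward to derive from stop dates and dates of death; the toy sketch below shows the shape of such a computation, with invented values and simplified reason labels rather than the actual register data.

```python
# Invented values for illustration; reason labels are simplified.
import pandas as pd

stops = pd.DataFrame({
    "reason": ["medication review", "terminal stage", "unable to take medication",
               "side effects", "medication review"],
    "stop_date": pd.to_datetime(["2019-01-05", "2019-06-20", "2019-09-01",
                                 "2018-10-01", "2019-03-15"]),
    "death_date": pd.to_datetime(["2019-04-01", "2019-06-27", "2019-09-03",
                                  "2019-08-09", "2019-05-10"]),
})
stops["days_before_death"] = (stops["death_date"] - stops["stop_date"]).dt.days
print(stops.groupby("reason")["days_before_death"].median())
```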
Medication and supplement use: This study describes preventive, potentially inappropriate medication use during the last year of life in patients with an LLI in the home setting. It shows that, of the 666 included patients ranging from 32 to 106 years of age, 458 (69%) used at least one of the preventive drugs in the last year of life, ranging from 60 patients (9%) using bisphosphonates to 243 (36%) using vitamins. Earlier research on medication use in 180 patients in the last week of life in the homecare setting in the Netherlands found a similar percentage of vitamin users. In contrast, the use of cholesterol-lowering medication and calcium supplements differed. We found that 35% of patients used cholesterol-lowering medication in their last year of life, as opposed to the finding by Arevallo et al. that 3.2% of patients used it in their final week. In addition, 24% of patients used calcium supplements in their last year of life compared with 6.5% in their last week of life. A Swedish study assessed medication use in patients with cancer aged 65 years and older in the 12th month before death, in which 21.5% of patients used cholesterol-lowering medication, 8.2% vitamins, 10.5% calcium supplements, and 4.2% bisphosphonates. Cholesterol-lowering medications and vitamins were discontinued by the smallest proportion of patients. Given that cholesterol-lowering medication is, in the literature, the most widely accepted medication to discontinue in the palliative setting, it is remarkable that relatively few patients stopped using it. Despite the clear consensus regarding cholesterol-lowering medication, there seems to be a discrepancy between literature and practice. Efforts have been made to reach similar consensus regarding other preventive medications. Delphi studies, for instance, have provided some guidance for discontinuing medication in the palliative care setting. Despite this guidance, we found that a substantial number of patients continue to use IMs in their last year of life. This may be due to limited awareness among healthcare providers and the clinical complexity that they are faced with when providing palliative care. The preventive drugs and supplements in this study are often appropriate in the general population; however, in the context of a palliative setting, they can become IMs. The continuation of these medications and supplements is possibly not consistent with the goal of providing optimal palliative care and meeting patients' wishes regarding quality of life. Therefore, it is important to have a critical discussion about the use of medication with the patient and informal caregivers/loved ones, in which it can be determined whether preventive medication and supplement use is in line with the patient's goals.

Reasons for discontinuation: In the documented reasons for discontinuation, we found a difference between a reactive and a proactive approach to discontinuing medication. In a reactive approach, medications are discontinued in reaction to an issue or problem. In a proactive approach, such issues or problems may be prevented. A medication review is a proactive approach to discontinuing IMs, whereas the other four documented reasons ((1) the terminal stage being undoubtedly reached, (2) the patient's own initiative, (3) inability to take medication, and (4) side effects) are more reactive in nature. The reactive reasons for discontinuation had the shortest median time between stopping and death, except for cessation due to side effects.
When the patient took the initiative to discontinue, the median time was 48 days; it was 7 days when the patient was undoubtedly in the terminal stage, and 2 days when the patient was unable to take medication. When medication was proactively discontinued during a medication review, this occurred earlier in the palliative phase, with a median duration between discontinuation and death of 71 days. It was striking that proactive discontinuation during medication reviews showed notably earlier cessation of medication and supplements in comparison with the other reasons for discontinuing. This proactive intervention possibly contributes to timely discontinuation of IMs, with positive consequences for quality of life. A systematic review and meta-analysis, including RCTs and non-randomized trials, compared deprescribing interventions, such as medication reviews, with usual care among community-dwelling older adults. It found a significant reduction in the use of IMs when a medication review was conducted. These findings underline the possible added benefit of a timely medication review in both patients with an LLI and community-dwelling older adults.

Strengths: This study comprises a large study population of 666 patients, which makes the results broadly generalizable. It differs on three significant points from similar studies concerning medication use in palliative care and thus contributes to the existing literature. Firstly, previous studies were often conducted in a hospital or hospice setting. Given that a substantial number of the patients who are eligible for palliative care die at home, it is of added value that this study provides data on these patients. Secondly, this study included all adult patients eligible for palliative care, regardless of LLI or age, while other studies focused on a specific diagnosis or exclusively on older patients. Lastly, in contrast to other studies, this study assesses medication and supplement use over the entire last year of life instead of the last week(s) of life.

Limitations: Nevertheless, our study has some limitations that need to be acknowledged. First, the retrospective nature of this study restricts us to the information that was documented. For instance, in some cases the date registered in the system as the stop date did not reflect the date on which the patient actually ceased using the medication. Thus, we cannot rule out the possibility that medication was discontinued in practice without this being registered. The registered date of death may differ slightly from the actual date on which the patient died. However, considering that this study covers a period of a year, this presumably has a limited effect. Additionally, the reasons for discontinuing medication were noted in only a minority of patients' files. In retrospect, this might make the size of the included patient population a limitation. Second, patients admitted to a hospice or hospital for a relevant amount of time were not excluded from the analysis. Information about medication use while admitted was not available for analysis. The same applies to information about over-the-counter use of vitamins and calcium. For both, this possibly means that some medication changes were not included in the results. Third, the manual review of patients' files could have led to some bias. Nonetheless, this is expected to have a minimal effect on our findings. Lastly, due to the pseudo-anonymised nature of the database, the palliative care knowledge of the prescribers is unknown.
The number of patients with an LLI who use, continue, and/or start IMs during the last year of life suggests that there is room for improvement in optimising relevant medication in the palliative phase. The appropriateness of these medications is context and patient dependent. When the setting changes from curative to palliative, the appropriateness of preventive medications and supplements needs to be reconsidered in relation to quality of life as well. As such, it is of added value to provide proactive patient-centred care that includes an advance care planning process with the patient. Such conversations should include the critical consideration of whether each medication is still in line with the patient's needs, wishes, and values. The aim of this proactive approach, in collaboration between GPs, homecare nurses, patients, and caregivers, is to contribute to the central goal of palliative care, which is to optimise quality of life.
Additional file 1: Appendix 1. Life-limiting illnesses. Appendix 2. Data-collection. Appendix 3. ATC-codes.
|
The Lausanne Hospitality Model: a model integrating hospitality into supportive care
|
6ba389e2-f58f-467b-83f5-115cdb55bc1d
|
10105661
|
Patient-Centered Care[mh]
|
Cancer prevalence and incidence have increased in most countries over the last decades. For many affected people living in countries with a high Human Development Index, cancer has become a long-term condition due to more effective screening, diagnosis, and treatments, leading to increased survivorship. In Switzerland, more than 40,000 new cases of cancer are diagnosed each year, with estimates of growing prevalence in the coming years and relative 10-year survival rates above 50%. The periods of cancer diagnosis, treatment, and follow-up are often burdensome, in addition to the physical, emotional, social, functional, and financial consequences of cancer that affect patients' quality of life. Thus, the question is not only whether patients survive a cancer diagnosis, but how (well) they survive. Age, culture, economic status, profession, place of living, and family situation are only some of the dimensions affected by or influencing the subjective experience of people affected by cancer. This contributes to a growing recognition that offering standardized care to patients and their informal caregivers is not adapted to their individual supportive care needs. Since the late 1960s, there has been a conceptual shift in the administration of care that places a central focus on "understanding the patient as a unique human being". Since then, patient-centered care has encountered growing recognition as a fundamental predictor of healthcare quality and patient safety. Patient-centered care is defined as care that responds to patients' physical, emotional, social, and cultural needs, where interactions with health professionals are compassionate and empowering, and where patients' values and preferences are taken into account. Patient-centered care is also one of the six core dimensions of quality of care according to the widely used framework developed by the Organisation for Economic Co-operation and Development (OECD). The shift to patient-centered care has led to the development of patient-reported experience measures (PREMs), which are measures of patients' perception of their experience of care that can be used to evaluate the quality and patient-centeredness of care delivery. PREMs encompass the range of interactions that patients have with the health system relating to their (i) satisfaction (e.g., with information given by nurses and doctors); (ii) subjective experiences (e.g., staff helped with pain); (iii) objective experiences (e.g., waiting time before an appointment); and (iv) observations of healthcare providers' behavior (e.g., whether or not a patient was given discharge information). Various conceptual frameworks with dimensions of patient experience have been developed to facilitate and standardize the use of PREMs. Most frameworks incorporate the following eight dimensions of patient-centered care: (1) respect for patients' values, preferences, and needs; (2) information, communication, and education; (3) coordination of care; (4) physical comfort; (5) emotional support; (6) involvement of family and friends; (7) continuity and transition between healthcare settings; and (8) access to care. These measures are usually collected through patient surveys, and the collected data are used to identify areas with lower patient experience scores to inform quality improvement initiatives. The recognition of the patient's experience as a determining factor in their treatment is an essential point in supportive care.
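In practice, PREM items are grouped under dimensions such as those listed above and aggregated so that the lowest-scoring areas can be prioritized for improvement. The sketch below illustrates one possible shape of that aggregation; the items, dimensions, and scores are invented for illustration and do not correspond to any validated PREM instrument, and Python is used here purely as an illustrative medium.

```python
# Invented example of aggregating PREM survey items by experience dimension;
# not a validated instrument or real survey data.
import pandas as pd

responses = pd.DataFrame({
    "item": ["information was clear", "information was clear", "waiting time acceptable",
             "waiting time acceptable", "staff showed empathy", "staff showed empathy"],
    "dimension": ["information and education", "information and education", "access to care",
                  "access to care", "emotional support", "emotional support"],
    "score": [4, 5, 2, 3, 5, 4],  # e.g., 1 (poor) to 5 (excellent)
})

dimension_scores = responses.groupby("dimension")["score"].mean().sort_values()
print(dimension_scores)
# The lowest-scoring dimension is a candidate focus for a quality improvement initiative.
print("Priority area:", dimension_scores.index[0])
```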
To achieve person-centered cancer care that goes beyond personalized treatment, a framework for targeted and tailored supportive care was developed by a task force created by the Ontario Cancer Treatment and Research Foundation, and it has been adapted in many countries since then. The guideline defines supportive care in oncology as "the provision of the necessary services for those living with or affected by cancer to meet their physical, emotional, social, psychological, informational, spiritual and practical needs during the diagnostic, treatment, and follow-up phases, encompassing issues of survivorship, palliative care and bereavement" (p. 11). In the present paper, we present an extension of the supportive care framework by drawing on contributions from the hotel industry and, more broadly, from the hospitality industry, in how the most appropriate services offered to patients in care settings are conceived. Brotherton defines hospitality as "contemporaneous human exchange, which is voluntarily entered into, and designed to enhance the mutual well-being of the parties concerned through the provision of accommodation and food or drink" (p. 168). Grounded in the physical environment of a service organization, relationships with guests, and customer value creation, hospitality is recognized as a strong determinant of the value of a core service, supporting features and processes to improve service personnel's and customers' experience. The authors, a transdisciplinary team that included a person affected by cancer, critically examined the model of supportive care and reflected on the role of hospitality services in patient experience and supportive care. The team included two nurses, one physician, one patient expert, and five researchers with expertise in supportive care in oncology, health services research, communication, and the hotel industry. Together, they developed a model through an iterative process, turning to service science to identify levers of positive experience. Building on the synergies between the supportive care framework and the hospitality perspectives coming from the hotel industry, the team developed "The Lausanne Hospitality Model: a model integrating hospitality into supportive care." The model is pictured in Fig. and is organized in four sections: (i) the Journey section presents the key points of the patient/client trajectory; (ii) the Components of experience outline the dimensions considered in the patient/client experience in supportive care and hospitality; (iii) the Components of services present the dimensions defining the services involved in supportive care and hospitality; and (iv) the last section, In practice, reviews the translation of the previous three sections into practical terms and managerial implications.
Cancer journey: The Supportive Care framework organizes the cancer journey in terms of clinical steps: before the treatment (i.e., the pre-diagnosis phase involving screening and assessment), during the treatment, and after the treatment during follow-up (i.e., the post-treatment phase), which correspond to the traditional stages identified in the literature on the customer journey. In the service industry, the customer journey refers to "a process or sequence that a customer goes through to access or use an offering of a company" (p. 336). The customer journey analysis is based on identifying all the direct or indirect contacts—called touchpoints—that the person has with the company and which participate in shaping the customer experience (e.g., the call to the clinic to make an appointment, pre-visit confirmation, valet parking, post-visit feedback). Touchpoints occur at various points in time, across multiple channels, whenever a customer interacts with the company—product, service, brand, advertisement, and website. In customer experience management, not all touchpoints are equivalent, and service interactions may carry higher expectations when the main offer is a service. Indeed, from both a theoretical and a practical point of view, identifying key touchpoints helps in designing a successful customer experience. Thus, according to Duncan and Moriarty, managing the customer journey and its different touchpoints will significantly impact the consumer's relationship with the organization, the brand, or the service. There are considerable differences between a patient journey, within which free choice is not possible or is limited by a disease, and a customer journey, which is usually characterized by free choice. Nevertheless, adopting a customer experience approach to patient experience has the advantage of bringing multiple touchpoints into patient experience considerations, which introduces multiple stakeholders. The hotel business is based on a process of continuous information gathering, carried out at each visit and each touchpoint, which makes it possible to generate a personalized experience and to adapt offers and services at each stay. The aim is to improve, stay after stay, the quality of the service and the care of the customer/patient, and to maintain consistency across all stays in terms of service and interaction with the institution. Thus, considering the patient's experience from the first touchpoint, even before the first visit to the clinic or hospital, makes it possible not only to pay attention to the quality of service from the very first contact, but also to identify the patient's needs in order to improve the care experience in the following steps. By gathering information about patients' needs, preferences, and other relevant personalized dimensions, we can improve their care experience as their journey progresses and ensure they feel a sense of continuity in their treatment.

Components of experience: Here, the model suggests extending patient experience to human experience and considering the totality of environmental aspects that are actual levers for improving the experience of a hospital visit. Access to information and the way it is provided can be crucial for the person's experience. Facing cancer involves dealing with unexpected new life challenges and possibly unmet needs, impacting the physical and psychological symptom burden in cancer patients.
Care that responds to patients' needs leads to better experiences of care for patients, which have been shown to be associated with higher levels of adherence to treatment processes, better clinical outcomes, better patient safety, and lower healthcare utilization. Based on the growing body of literature about patient experience, Wolf et al. identified the following recurrent aspects within the various existing (and inconsistent) definitions of patient experience: "emotional and physical lived experiences, personal interactions, spanning across the [care] continuum, shaped by the organization/culture, and importance of partnership/patients involvement." These aspects were integrated by the Beryl Institute in its definition of patient experience as "the sum of all interactions, shaped by an organization's culture that influence patient perceptions, across the continuum of care". The supportive care framework deconstructs the patient experience and targets the diverse needs that patients encounter during the cancer journey, contributing to the conceptual shift in which the patient is considered in his or her individuality. These needs may be informational (e.g., care processes, communication with caregivers, orientation, help with decision-making), emotional (e.g., fear, distress, anxiety, talking with peers, isolation), practical (e.g., transportation, child care, financial issues), spiritual (e.g., search for meaning), social (e.g., changes in roles, telling other people), psychological (e.g., anxiety disorders, self-image problems, sexual problems), and physical (e.g., pain, fatigue, weight changes). The intensity of these needs may also vary depending on the stage of the cancer journey. For instance, the need for information may be higher in the early course of illness, while the need for pain control may predominate as the cancer progresses. From a service perspective in the tourism industry, Godovykh and Tasci define customer experience as "the totality of cognitive, affective, sensory, and conative responses, on a spectrum of negative to positive, evoked by all stimuli encountered in pre, during, and post phases of consumption affected by situational and brand-related factors filtered through personal differences of consumers, eventually resulting in differential outcomes related to consumers and brands" (p. 5). According to the authors, the cognitive component encompasses cognition, thoughts, educational and informative aspects, intellectual ability, rational capacity, knowledge, and memories. The emotional component—or affective component—refers to affects, feelings, emotions, or mood experienced by a visitor. The sensorial component is described as the sensations encountered by the customer. Finally, the conative component is related to behavior, involvement, action, and practice. While the emotional and cognitive components of visitor experience overlap with patients' needs, the sensorial and conative components are supplementary and help in understanding experience from a broader perspective, integrating the impact of the environment and professionals' behavior, beyond the disease. Visitor experience has been a key consideration for service organizations since economic development moved from a service economy to an experience economy. When the priority shifted away from the employees' behavior and the host's viewpoint, considering guest or customer experience (e.g., interactions with staff, but also with the environment) became the main focus for effectively improving the hospitality of the organization.
Literature in marketing research and customer experience shows how the servicescape, namely the "built environment surrounding the service," shapes customer expectations and influences the nature of the customer experience. The service setting influences the person's interaction with the institution through various sensorial stimuli, serves as a facilitator by impacting the flow of activities, and conveys organizational culture by socializing both customers and employees. In other words, the set of physical elements makes up the customer/patient experience. Consideration of hospitality-based experience then requires viewing experience as a multidimensional construct, where the touchpoints directly influence sensory, affective, relational, or cognitive dimensions throughout the pre-treatment, treatment, and post-treatment phases. In light of the plural nature of the customer experience, researchers in hospitality management have multiplied attempts to develop appropriate models showing what makes up the experience and how it is organized. In the health field, the "Hospitality-oriented Patient Experience – HOPE" model developed by Hunter-Jones et al. offers a framework for considering the patient experience with a hospitality orientation. At the junction of the hospitality, health, and customer experience literature, the HOPE framework offers a hospitality-based approach to healthcare delivery and customer experience management, in which patients and hospital staff work together to improve the patient experience across every touchpoint in the care journey. By distancing itself from an outdated paternalistic model of care, the HOPE framework offers a vision of the patient who goes from being a user to a shaper of services.

Main components of services: The health literature emphasizes how targeted and tailored supportive care influences various dimensions of the healthcare pathway, for both patients and the healthcare professionals involved. Indeed, the systematic review by Deneckere et al. shows how the organization of treatments into care pathways reduces in-hospital complications and supports interprofessional teamwork, by influencing staff knowledge, interprofessional documentation, team communication, and team relationships. Coulter et al. emphasize, for instance, that a better work environment, as well as patients who trust and respect their physicians, improve adherence, enhance self-care, and result in higher ratings from patients. Bibby et al., focusing on adolescents and young adults living with cancer, suggest a need for age-appropriate information and treatment facilities, access to emotional support services, contact with peers, and fertility information and services. The systematic review by Driessen et al. highlights the added value of combining formal hospital care (e.g., provided by doctors, nurses, and hospital psychosocial caregivers) with informal care (e.g., provided by volunteers, websites, online support programs, non-hospital therapy) in supporting both patients with cancer and their families in coping with the diagnosis: it has the potential to provide emotional and informational support, be cost-effective, and increase patient satisfaction with the care provided. Additionally, a strong need for emotional support was also identified as a main psychosocial issue when people were told their cancer was incurable. Finally, the literature synthesis of Kandampully et al.
indicates that although limited, literature on customer experience management shows that factors such as aesthetics, ambiance, lighting, social, services design, emotions, and customer-customer interactions have a significant impact on the customer experience in the hospitality industry. Targeting patient’s needs when facing cancer allows for the development and delivery of relevant services within the health institution. The framework of supportive care is supplemented by recommendations for practice to be used on a managerial level. Thus, several types of services are highlighted as having a benefit for cancer patients and their family members: facilitating orientation and introducing the oncological health system, providing emotional support, providing an opportunity to learn about and develop coping skills, offering regular assistance, setting up a crisis intervention program, carrying out psychotherapy sessions if necessary, giving advice regarding nutrition, reducing distress due to symptoms, and designing services to assist practical and functional matters . As specified by the authors, the challenge lies in ensuring that information is provided to patients and their relatives and that access to services is facilitated. The services mentioned in the framework of supportive care have the advantage of taking into account clinical, practical, and environmental dimensions that a health institution should address. In order to develop an instrument for measuring hospitality, Pijls et al.’s qualitative analysis of interviews with hospitality experts and customers of service organizations resulted in the subdivision of hospitality into nine experiential dimensions. The Welcome dimension refers to a warm reception and atmosphere. Feeling at ease describes feeling at home, confident, safe, and relaxed. The dimension of Empathy relates to the idea that the organization understands what guests want and need. Servitude represents the feeling that the organization genuinely wants to serve the guest. The Acknowledgment dimension involves the feeling of being taken seriously and experience contact. Autonomy refers to the level of control that a guest has over what happens. Efficiency is associated with smooth procedures and the ease of arranging what guest wants. Finally, the Entertainment dimension refers to the ability of the organization to provide options for pastime (e.g., magazine, drinks). While the healthcare approach bases its services on clinical dimensions, the hospitality approach offers a deconstruction of the service around the factors influencing the reception of these services by any consumer. The feeling of being welcomed, comfortable, heard, recognized, and entertained, with easy access to different services, but also in possession of the resources to act autonomously, are all factors that will shape the patient/client's evaluation of the service and, by extension, the quality of their experience. Thinking about services offered to cancer patients through a hospitality lens implies revisiting the very definition of what a hospital service entails. In practice The fourth and final section of the model highlights the behavior and capabilities of the organization to execute service in practice, as a determining factor of the guest/patient’s experience of hospitality. 
In the last two decades, hospitality has taken a key role in the health sector and several hospitals around the globe have taken inspiration from the hotel and guest services industry to redesign their environments accordingly. For instance, the Texas Children’s Hospital of Houston (USA) and the Disney Institute collaborated in the development of a tailored immersive experience program to enhance the patient and family journey . A similar approach has been adopted at Christ Hospital Health Network, in Cincinnati (USA), with the help of the Ritz-Carlton. The Mayo Clinic in Rochester (USA) built their worldwide reputation around the priority that “the needs of the patient come first:” in addition to a smartphone application and a concierge service, the clinic places great importance on the interpersonal skills of their staff members during the recruitment processes . The Centre Hospitalier Universitaire de Montréal CHUM (Canada), as well as every public hospitals in Paris (France), have adopted an SMS information system to guide the patient and inform family members about treatment follow-ups. Montefiore Hospital in New York (USA) have created a patient and customer service department to integrate hospitality features into healthcare . The Henry Ford West Bloomfield Hospital, outside Detroit (USA), offers a uniformed valet service, patient meals served on demand 24 h a day, and in-room massages . Finally, The Farrer Park Company in Singapore offers an integrated healthcare and hospitality complex . These examples are just some of the initiatives showing how the health sector has been inspired by the hotel industry to consider services in patient care. In their “Hospitality in patient experience framework,” Hunter-Jones et al. outline recommended practices for providing hospitable services, organized in pre-/during/post-visit. Before the visit, services giving access to the information have to be simple, clear, and at ease (e.g., knowing and understand the goal of the appointment, how to access the meeting point, identifying who can be contacted in case of need). During the visit, the process needs to be smooth and swift, with a direct understanding of who the interlocutors are, what is discussed or treated and what the next steps are (e.g., available interlocutors, identification of the priority contact persons, awareness, and instructions on post-visit activities). Finally, the post-visit stage also deserves care, by ensuring a correct follow-up of the experience (e.g., post-visit feedback, easy access to information provided by healthcare professionals). Working to co-creating values with customers, investigating in quality management or focusing on customer orientation are all strategies working towards a shift from service logic towards service orientation . In other words, hospitable service offerings and high-quality performance provided by employees are at the core of the emotional experience that impacts long-term customer satisfaction .
The Supportive Care framework organizes the cancer journey in terms of clinical steps: before the treatment (i.e., pre-diagnosis phase involving screening and assessment), during the treatment, and after the treatment during follow-up (i.e., post-treatment phase), which correspond to the traditional stages identified in the literature on the customer journey . In the service industry, the customer journey refers to “a process or sequence that a customer goes through to access or use an offering of a company” (p. 336)” . The customer journey analysis is based on identifying all the direct or indirect contacts—called touchpoints—the person has with the company and which participate in shaping the customer experience (e.g., the call to the clinic to make an appointment, pre-visit confirmation, valet parking, post-visit feedback). Touchpoints occur at various points in time, across multiple channels, whenever a customer interacts with the company—product, service, brand, advertisement, and website . In customer experience management, not all touchpoints are equivalent and service interactions may encounter more expectations when the main offer is a service . Indeed, from both a theoretical and a practical point of view, identifying key touchpoints will help designing a successful customer experience. Thus, according to Duncan and Moriarty , managing the customer journey and its different touchpoints will significantly impact the consumer’s relationship with the organization, the brand, or the service. There are considerable differences between a patient journey within which free choice is not possible or limited by a disease, and a customer journey that is usually characterized by free choice. Nevertheless, adopting a customer experience approach in patient experience shows the advantage of bringing multiple touchpoints into the patient experience considerations, which introduces multiple stakeholders . The hotel business is based on a process of continuous information gathering, carried out at each visit and each touchpoint, which allows to generate a personalized experience and adapts offers and services at each stay . The aim is to improve, stay after stay, the quality of the service and the care of the customer/patient, to maintain consistency in all the stays, in terms of service and interaction with the institution. Thus, considering patient’s experience from the first touchpoint, even before the first visit at the clinic or hospital, makes it possible not only to pay attention to the quality of service since the very first contact, but also to identify patient’s needs to improve the care experience of the following steps. By gathering information about patients’ needs, preferences, and other relevant personalized dimensions, we can improve their care experience as their journey progresses, and ensure they feel a sense of continuity in their treatment.
The model suggests here to extend patient experience to human experience, and consider the totality of environmental aspects that are actual levers to improve a hospital visit experience. Access to information and the way it is provided can be crucial for the person’s experience. Facing cancer involves dealing with unexpected new life challenges and possibly unmet needs, impacting physical and psychological symptom burden in cancer patients . Care responding to patient’s needs leads to better experiences of care for patients, which have been shown to be associated with higher levels of adherence to treatment processes, better clinical outcomes, better patient safety, and less healthcare utilization . Based on the growing body of literature about patient experience, Wolf et al. identified the following recurrent aspects within the various existing (and inconsistent) definitions of patient experience: “emotional and physical lived experiences, personal interactions, spanning across the [care] continuum, shaped by the organization/culture, and importance of partnership/patients involvement.” These aspects were integrated by the Beryl’s Institute in their definition of patient experience as “the sum of all interactions, shaped by an organization’s culture that influence patient perceptions, across the continuum of care” . The supportive care framework deconstructs the patient experience and targets the diverse needs that patients encounter during the cancer journey, contributing to the conceptual shift in which the patient is considered in his or her individuality. These needs may be informational (e.g., care processes, communication with caregivers, orientation, help with decision-making), emotional (e.g., fear, distress, anxiety, talking with peers, isolation), practical (e.g., transportation, child care, financial issues), spiritual (e.g., search for meaning), social (e.g., changes in roles, telling other people), psychological (e.g., anxiety disorders, self-image problems, sexual problems), and physical (e.g., pain, fatigue, weight changes). The intensity of these needs may also vary depending on the stage of the cancer journey. For instance, the need for information may be higher in the early course of illness, while the need for pain control may predominate as the cancer progresses. From a service perspective in tourism industry, Godovykh and Tasci define customer experience as “the totality of cognitive, affective, sensory, and conative responses, on a spectrum of negative to positive, evoked by all stimuli encountered in pre, during, and post phases of consumption affected by situational and brand-related factors filtered through personal differences of consumers, eventually resulting in differential outcomes related to consumers and brands” (p. 5). According to the authors, the cognitive component encompasses cognition, thoughts, educational and informative aspects, intellectual ability, rational capacity, knowledge, and memories. The emotional component—or affective —refers to affects, feelings, emotions, or mood experienced by a visitor. The sensorial component is described as sensations encountered by the customer. Finally, the conative components are related to behavior, involvement, act, and practice. 
If emotional and cognitive components of visitor experience overlap with patient’s needs, sensorial and conative components are supplementary and help understanding experience from a broader perspective, integrating the impact of the environment and professionals’ behavior, beyond the disease. Visitor experience has been a key consideration for service organizations, since the economic development moved from a service economy to an experience economy . When the priority shifted away from the employees’ behavior and the host’s viewpoint, considering guest or customer experience (e.g., interactions with staff, but also with the environment) became the main focus to effectively improve the hospitality of the organization . Literature in marketing research and customer experience shows how the servicescape, namely the “built environment surrounding the service,” shapes customer expectations and influences the nature of customer experience . Service setting influences the person’s interaction with the institution through various sensorial stimuli, serves as a facilitator by impacting the flow of activities, and conveys organizational culture by socializing both customers and employees . In other words, the set of physical elements makes up the customer/patient experience. Consideration of hospitality-based experience requires then to view experience as a multidimensional construct, where the touchpoints directly influence sensory, affective, relational, or cognitive dimensions throughout pre-treatment, treatment, and post-treatment phases. In light of the plural nature of the customer experience, researchers in hospitality management are multiplying the attempts of developing appropriate models showing what makes the experience and how it is organized. In the health field, the “Hospitality-oriented Patient Experience – HOPE” model developed by Hunter-Jones et al. offers a framework for considering the patient experience with a hospitality orientation. At the junction of hospitality, health, and customer experience literature, the HOPE framework offers a hospitality-based approach to healthcare delivery and customer experience management, in which patients and hospital staff work together to improve the patient experience across every touchpoint in the care journey. By distancing itself from an outdated paternalistic model of care, the HOPE framework offers a vision of the patient, who goes from being a user to a shaper of services.
The health literature emphasizes how targeted and tailored supportive care influences various dimensions of the healthcare pathway, for both patients and healthcare professionals involved. Indeed, the systematic review by Deneckere et al. shows how the organization of treatments into care pathways reduces in-hospital complications and supports interprofessional teamwork, by influencing staff knowledge, interprofessional documentation, team communication, and team relationships. Coulter et al. emphasize, for instance, that a better work environment, as well as patients who trust and respect their physicians, improve adherence, enhance self-care and result in higher ratings from patients. Bibby et al. , focusing on adolescents and young adults living with cancer, suggest a need for age-appropriate information and treatment facilities, access to emotional support services, contact with peers, and fertility information and services. The systematic review by Driessen et al. highlights the added value of combining hospital formal care (e.g., provided by doctor, nurse, hospital psychosocial caregivers) with informal care (e.g., provided by volunteers, websites, online support programs, non-hospital therapy) in supporting both patients with cancer and their families in coping with the diagnosis: it has the potential to provide emotional and informational support, be cost-effective, and increase patient satisfaction with the care provided. Additionally, a strong need for emotional support was also identified as a main psychosocial issue when people were told their cancer was incurable . Finally, the literature synthesis of Kandampully et al. indicates that although limited, literature on customer experience management shows that factors such as aesthetics, ambiance, lighting, social, services design, emotions, and customer-customer interactions have a significant impact on the customer experience in the hospitality industry. Targeting patient’s needs when facing cancer allows for the development and delivery of relevant services within the health institution. The framework of supportive care is supplemented by recommendations for practice to be used on a managerial level. Thus, several types of services are highlighted as having a benefit for cancer patients and their family members: facilitating orientation and introducing the oncological health system, providing emotional support, providing an opportunity to learn about and develop coping skills, offering regular assistance, setting up a crisis intervention program, carrying out psychotherapy sessions if necessary, giving advice regarding nutrition, reducing distress due to symptoms, and designing services to assist practical and functional matters . As specified by the authors, the challenge lies in ensuring that information is provided to patients and their relatives and that access to services is facilitated. The services mentioned in the framework of supportive care have the advantage of taking into account clinical, practical, and environmental dimensions that a health institution should address. In order to develop an instrument for measuring hospitality, Pijls et al.’s qualitative analysis of interviews with hospitality experts and customers of service organizations resulted in the subdivision of hospitality into nine experiential dimensions. The Welcome dimension refers to a warm reception and atmosphere. Feeling at ease describes feeling at home, confident, safe, and relaxed. 
The dimension of Empathy relates to the idea that the organization understands what guests want and need. Servitude represents the feeling that the organization genuinely wants to serve the guest. The Acknowledgment dimension involves the feeling of being taken seriously and of experiencing personal contact. Autonomy refers to the level of control that a guest has over what happens. Efficiency is associated with smooth procedures and the ease of arranging what the guest wants. Finally, the Entertainment dimension refers to the ability of the organization to provide options for pastime (e.g., magazines, drinks). While the healthcare approach bases its services on clinical dimensions, the hospitality approach offers a deconstruction of the service around the factors influencing the reception of these services by any consumer. The feelings of being welcomed, comfortable, heard, recognized, and entertained, together with easy access to different services and the resources to act autonomously, are all factors that shape the patient's/client's evaluation of the service and, by extension, the quality of their experience. Thinking about services offered to cancer patients through a hospitality lens implies revisiting the very definition of what a hospital service entails.
The fourth and final section of the model highlights the behavior and capabilities of the organization to execute service in practice, as a determining factor of the guest/patient's experience of hospitality. In the last two decades, hospitality has taken a key role in the health sector, and several hospitals around the globe have taken inspiration from the hotel and guest services industry to redesign their environments accordingly. For instance, the Texas Children's Hospital of Houston (USA) and the Disney Institute collaborated in the development of a tailored immersive experience program to enhance the patient and family journey. A similar approach has been adopted at Christ Hospital Health Network, in Cincinnati (USA), with the help of the Ritz-Carlton. The Mayo Clinic in Rochester (USA) built their worldwide reputation around the priority that "the needs of the patient come first:" in addition to a smartphone application and a concierge service, the clinic places great importance on the interpersonal skills of its staff members during recruitment processes. The Centre Hospitalier Universitaire de Montréal CHUM (Canada), as well as all public hospitals in Paris (France), have adopted an SMS information system to guide the patient and inform family members about treatment follow-ups. Montefiore Hospital in New York (USA) has created a patient and customer service department to integrate hospitality features into healthcare. The Henry Ford West Bloomfield Hospital, outside Detroit (USA), offers a uniformed valet service, patient meals served on demand 24 h a day, and in-room massages. Finally, The Farrer Park Company in Singapore offers an integrated healthcare and hospitality complex. These examples are just some of the initiatives showing how the health sector has been inspired by the hotel industry to consider services in patient care. In their "Hospitality in patient experience framework," Hunter-Jones et al. outline recommended practices for providing hospitable services, organized into pre-, during-, and post-visit stages. Before the visit, services giving access to information have to be simple, clear, and easy to use (e.g., knowing and understanding the goal of the appointment, how to reach the meeting point, and who can be contacted in case of need). During the visit, the process needs to be smooth and swift, with a direct understanding of who the interlocutors are, what is discussed or treated, and what the next steps are (e.g., available interlocutors, identification of the priority contact persons, and awareness of and instructions on post-visit activities). Finally, the post-visit stage also deserves care, by ensuring proper follow-up of the experience (e.g., post-visit feedback, easy access to information provided by healthcare professionals). Working to co-create value with customers, investing in quality management, or focusing on customer orientation are all strategies working towards a shift from service logic towards service orientation. In other words, hospitable service offerings and high-quality performance provided by employees are at the core of the emotional experience that impacts long-term customer satisfaction.
The aim of this paper was to broaden the understanding of supportive care, by integrating hospitality into supportive care and offering an extension to the framework developed by Fitch. The "Lausanne Hospitality Model: a model integrating hospitality into supportive care" considers components and perspectives that are usually treated independently in the literature. Integrating hospitality components into supportive care is based on the argument that expanding the range of services provided in the care journey can enhance the patient's overall experience. The commitment to delivering high-quality informational, emotional, and practical services transcends geographical boundaries. Nonetheless, the development of this model was tailored for implementation within a particular socio-cultural environment. Before applying the model in a different country, it should be adapted to fit its cultural, socio-economic, and political environment. The definition of what constitutes a quality service, what shapes the patient experience, the level of sensitivity to the physical environment, and how to translate these services into practice would require adjustments accordingly. The model positions itself as part of the innovative analytic turn that aims at considering patient experience in its totality and complexity, considering not only the clinical characteristics of the individual, but also the extent to which environmental, organizational, and emotional factors may determine his or her experience. This paper addresses previously overlooked aspects of patients' hospital experiences by integrating a hospitality perspective into healthcare delivery and supportive care in the hospital.
|
Ambulatory pulmonary vein isolation workflow using the Perclose Proglide
|
92944fba-097b-4ee1-a6c8-6b0cb4cd0921
|
10105833
|
Suturing[mh]
|
Pulmonary vein isolation (PVI) for the treatment of atrial fibrillation (AF) is an increasingly performed procedure worldwide. The procedure workflow has been improving quickly through both the implementation of new ablative technologies and peri-procedural management, so that PVI is now routinely performed within 120 min. Although procedure time has been shortened significantly, the post-operative management of patients undergoing PVI has remained almost unchanged. Thus, the possibility of performing these procedures in a same-day discharge setting presents an interesting prospect in the electrophysiology (EP) field. To date, the possibility of a rapid discharge after PVI is limited by post-procedural adverse events, driven mainly by vascular complications. The incidence of serious complications such as femoral pseudoaneurysm, arteriovenous fistula, and retroperitoneal bleeding is approximately 1.5% and increases with the number and size of the sheaths used. In addition, the intensive peri-procedural anticoagulation regimen recommended at the time of PVI procedures increases the incidence of minor complications such as bleeding or haematomas. In order to reduce complications related to the venous puncture, an ultrasound-guided approach for venous access has been adopted, resulting in a drastic decline in puncture-related complications. As for the post-operative management, a figure-of-eight suture and/or manual compression (MC), followed by post-operative bedrest (up to 8 h), is still the standard approach in many centres. Hence, the application of devices that can reduce the bedrest time after EP procedures is an interesting clinical opportunity. The aim of this study was to evaluate the feasibility, safety, and efficacy of using Perclose ProGlide™ suture-mediated vascular closure in percutaneous PVI procedures for the purpose of enabling a workflow without any bedrest after recovery from anaesthesia. We evaluated the feasibility of this method to achieve early mobilization of patients undergoing PVI. We also investigated patients' clinical symptoms and satisfaction. The incidence of vascular complications at Days 1, 7, and 30 following the intervention was analysed. Hospital costs compared with the standard of care (SOC) in our hospital were assessed, considering direct and indirect costs.
We performed an observational prospective single-centre cohort study of patients admitted for PVI with Perclose ProGlide™ system use, from January 2020 to May 2021. Data were prospectively collected; an electronic case report form (e-CRF) was promptly completed. Source data and database quality control was performed by investigators. A detailed description of the study design had been previously accepted by the local ethical committee. A Clinical Events Committee was recruited for the follow-up clinical events evaluation.
The primary endpoint (feasibility of an ambulatory PVI strategy) was assessed as the percentage of patients being able to be discharged the same day of the procedure. The secondary endpoints were analysed only for patients who met the primary endpoint. Acute vascular device closure performance was evaluated as the number of successful deployments out of total number of devices utilized (two for single PVI). Immediate haemostasis (< 1 min from device deployment) rate was recorded as a proportion of the total number of procedures. Post-procedural time to reach haemostasis was measured as the time from the delivery of the closure device to confirmed venous haemostasis in those patients needing further manual compression after device deployment. Time to ambulation was analysed as time from the removal of closure device to patients’ ability to walk. Time to be deemed suitable for discharge was calculated as the time from the removal of closure device to the medical assessment that deemed discharge possible. Time to discharge was considered as the effective time from the removal of closure device to patient discharge. Patient satisfaction was evaluated using the Post Ablation Procedure Patient Survey (see ). Minor and major vascular complications were calculated as the number of patients with venous access-related issues both requiring or not investigation, medical management, or surgical intervention out of the total number of patients enrolled.
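To make these endpoint definitions concrete, the short sketch below computes the same rates and intervals from a per-patient table. It is purely illustrative: the column names, timestamps, and values are hypothetical and do not come from the PROPVI dataset.

```python
import pandas as pd

# Hypothetical per-patient records (not the PROPVI data); two closure devices per PVI.
records = pd.DataFrame({
    "devices_used": [2, 2, 2],
    "devices_successful": [2, 2, 2],
    "immediate_haemostasis": [True, False, True],  # haemostasis < 1 min from deployment
    "closure_time": pd.to_datetime(["2020-03-01 11:00", "2020-03-01 12:10", "2020-03-02 09:45"]),
    "ambulation_time": pd.to_datetime(["2020-03-01 14:05", "2020-03-01 15:40", "2020-03-02 12:50"]),
    "discharge_time": pd.to_datetime(["2020-03-01 16:50", "2020-03-01 18:20", "2020-03-02 15:30"]),
    "same_day_discharge": [True, True, True],
})

# Primary endpoint: proportion of patients discharged on the day of the procedure.
primary = records["same_day_discharge"].mean()

# Acute device performance: successful deployments out of all devices used.
device_success = records["devices_successful"].sum() / records["devices_used"].sum()

# Immediate haemostasis rate, and intervals measured from closure-device removal.
immediate_rate = records["immediate_haemostasis"].mean()
time_to_ambulation = records["ambulation_time"] - records["closure_time"]
time_to_discharge = records["discharge_time"] - records["closure_time"]

print(f"Same-day discharge {primary:.0%}, device success {device_success:.0%}, "
      f"immediate haemostasis {immediate_rate:.0%}")
print("Median time to ambulation:", time_to_ambulation.median())
print("Median time to discharge:", time_to_discharge.median())
```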
All patients scheduled for elective PVI were considered eligible for study participation. All patients participating accepted to be enrolled in the study and signed the informed consent after detailed discussion. Exclusion criteria were: age <18 years, previous adverse event after vascular access resulting in prolonged hospitalization, previous vascular surgery in either leg or in the aorto-iliac axis, nonstandard ablation (i.e. need for more than two femoral punctures), known history of bleeding diathesis, coagulopathy, hypercoagulability or platelet count < 100 000 cells/mm 3 , history of deep vein thrombosis (DVT), pulmonary embolism or thrombophlebitis, significant anaemia or renal insufficiency, haemodynamic or electrical instability, body mass index (BMI) > 45 kg/m 2 or < 20 kg/m 2 , active liver disease or hepatic dysfunction, severe renal dysfunction, defined as an estimated global filtration rate (eGFR) < 30 mL/min/1.73 m 2 unless the patient is in renal support therapy.
All subjects provided written informed consent prior to the PVI procedure. Patients were admitted to the hospital at the same day of the intervention. All procedures were performed under uninterrupted anticoagulation, if indicated, and heparin was administered during the procedure in order to get an activated clotting time (ACT) of 300 s or greater. No anticoagulation was given on the morning of the procedure itself. All procedures were performed under general anaesthesia. Two sheaths with 8 and 8.5 French diameter, respectively, were placed in the right common femoral vein via an US-guided approach. No local anaesthetic was given at any site during any phase of the procedure. At the end of the procedure, a Perclose Proglide TM -closure system was used for each sheath. After discharge from the anaesthesiology recovery unit, patients were kept in observation in an ambulatory recovery room, in a sedentary position without further bedrest, until discharge was deemed possible by the attending physician. A transthoracic ultrasound to exclude pericardial effusion was part of the pre-discharge assessment as per the standard of care for all patients in our institution.
The primary endpoint was the rate of patients discharged the same day of the procedure. Secondary endpoints were: (i) acute vascular device closure performance, (ii) post-procedural time to reach haemostasis, (iii) time to ambulation, (iv) time to possibility of discharge, (v) time to discharge, and (vi) patient satisfaction. Prior to discharge, all patients were asked to complete the Post Ablation Procedure Patient Survey questionnaire (see ) in which scores from zero (very unsatisfied) to 10 (very satisfied) were assigned. Level of pain, need for analgesic medications, and patient’s satisfaction were also evaluated using the ‘Post Ablation Procedure Patient Survey’ questionnaire (see ). The secondary safety endpoint was the incidence of minor and major vascular complications within 30 days after the procedure and according to the Clinical Event Committee (CEC) analysis.
Cost comparison considered direct and indirect costs, including time and staff allocation spent in the EP lab and the ward. Procedure timings and relative costs were provided by the cardiology department of OLV Aalst. Costs related to nursing staff salaries are in accordance with the mandatory barema scales, and expert opinion was sought for clinician staff costs. A detailed description of the process and methodology of the cost comparison is provided as Supplementary material to the manuscript.
Patients were followed up for a period of 30 days after the procedure. Photographs of the puncture site were collected from the patients or their caregivers at Day 1 and/or Day 7 after the procedure. Patients were instructed to inform the investigators at any time of any new symptoms. A CEC consisting of three independent members evaluated the occurrence of adverse events.
For descriptive analysis, categorical variables are presented as absolute numbers and their relative percentages, continuous variables are presented as mean ± standard deviation (SD) and/or median and interquartile range (IQR) according to normal or non-normal distribution. Baseline characteristics are presented as numbers (%) for categorical variables and as means ± standard deviation for continuous variables. Differences between groups were analysed using the Student’s t -test or the Mann–Whitney U test for continuous variables, and the χ 2 test or Fisher’s exact test for categorical variables, as appropriate. Propensity score (PS)-matching was used to reduce selection bias between the PROPVI group and the general population and to adjust for significant differences in the patients’ baseline characteristics. The propensity score was computed by a logistic regression model, and the matching was performed using the nearest neighbour method with a 1:1 ratio. Matching criteria were age, sex, BMI, hypertension, diabetes, smoking habit, peripheral arteriopathy (PAD), and creatinine clearance. Analyses were performed with R version 3.5.2 (R Foundation for Statistical Computing, Vienna, Austria). A P -value of 0.05 was considered statistically significant.
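The matching itself was performed in R (version 3.5.2); the sketch below only illustrates the same idea in Python on simulated data: a logistic-regression propensity score followed by 1:1 nearest-neighbour matching on the logit scale. The variable names mirror the matching criteria listed above, and all values are simulated. Note that this simplified version allows a control to be matched more than once, whereas dedicated matching packages typically match greedily without replacement.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": np.r_[np.ones(50), np.zeros(n - 50)].astype(int),  # 50 study patients vs. controls
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
    "hypertension": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "pad": rng.integers(0, 2, n),
    "creatinine_clearance": rng.normal(80, 20, n),
})
covariates = ["age", "female", "bmi", "hypertension", "diabetes",
              "smoker", "pad", "creatinine_clearance"]

# 1) Propensity score: probability of being in the study group given the covariates.
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"]).predict_proba(df[covariates])[:, 1]
logit_ps = np.log(ps / (1 - ps)).reshape(-1, 1)

# 2) For each treated patient, find the nearest control on the logit of the propensity score.
treated_idx = df.index[df["treated"] == 1]
control_idx = df.index[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(logit_ps[control_idx])
_, pos = nn.kneighbors(logit_ps[treated_idx])
matched_controls = control_idx[pos.ravel()]

# 3) Check covariate balance in the matched sample.
matched = pd.concat([df.loc[treated_idx], df.loc[matched_controls]])
print(matched.groupby("treated")[covariates].mean().round(2))
```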
A predefined number of 50 patients were enrolled ( Table ). All patients underwent PVI with radiofrequency (RF) ablation. No intra-procedural complications occurred. At the end of the procedure, two Perclose ProGlide™ systems were placed—one for each vein. In total, 48/50 (96%) patients were discharged on the day of the procedure ( Table ). Forty-nine patients (98%) were deemed to be suitable for discharge, but one patient requested to prolong the admission for non-medical reasons. One patient was kept supine due to discomfort until an ultrasound evaluation carried out the day after excluded severe complications. Successful ProGlide deployment was observed in all 48 patients who met the primary endpoint (96/96 devices, 100% success rate). After the procedure, anticoagulation was reversed by protamine administration in two patients (4%). Immediate haemostasis was reached in 30 patients (60%), with 20 subjects (40%) requiring short additional manual compression. In this last group, the mean and median times to achieve haemostasis were 4 min and 34 s (± 3:27) and 3 min (2–15), respectively. During the post-operative stay, two patients (4.2%) experienced minor bleeding, which stopped after additional short manual compressions. Mean and median times to ambulation were 3 h and 18 min (± 1:05) and 3 h and 11 min (1:26–6:23), respectively. The mean and median times to discharge in the 48 patients were 5 h and 48 min (± 1:03) and 5 h and 51 min (3:38–7:57) ( Figure and Table ). By using the ‘Post Ablation Procedure Patient Survey,’ patients reported high levels of satisfaction for the total duration of the supine position, as well as for discomfort and access-related pain during the post-operative supine position. The mean score assigned to ‘satisfaction about the total duration of lying position after PVI’ was 7.8/10, vs. a score of 4.7/10 assigned to a hypothetical supine position lasting 2–3 h longer. Similar results were observed regarding the discomfort and access-related pain during the post-operative supine position ( Figure and Table ). Post-operative pain required management in only three patients, consisting of two single doses of salicylic acid or paracetamol and one repeated administration of paracetamol. There was no need for local analgesia or long-term analgesia prescription ( Table ). All patients concluded the 30-day follow-up period. They all received phone calls at Days 1, 7, and 30. No major vascular complications or need for any invasive intervention were described in the overall population ( Table ). Post-procedural haematomas bigger than 6 cm occurred in three patients (6.25%) and spontaneously resolved during the follow-up time with no further assessment or management needed. In 15 patients (31%), an asymptomatic superficial haematoma <6 cm occurred, and in one patient (2.08%) a transient access site-related nerve injury was reported ( Table ). For the analysis of cost, this workflow was compared with the figure-of-eight approach, which represents the standard of care in our centre. With respect to the figure-of-eight method, the use of Perclose ProGlide™ increased the overall cost of the procedure (device and staff costs included) by €259.15, favouring figure-of-eight closure. However, considering the time spent in the ward after the ablation, the use of Perclose ProGlide™ reduced the time to discharge by 60 min.
This reduction potentially increases the cost-effectiveness of the PVI unit by improving efficiency and flow-through of patients, with the potential to increase the number of procedures or other day-case treatments being carried out (for example, diuretic administration for the treatment of heart failure). The above staff and procedure efficiencies offset the cost of the Perclose ProGlide™ system, resulting in a neutral economic impact for the hospital for a single patient ( Figure ). From the contemporary cohort of patients undergoing PVI with standard-of-care treatment including figure-of-eight suture, 166 patients were matched in a 1:1 ratio with the PROPVI study group. No differences were observed across baseline characteristics of the matched population ( Table ). Details of the propensity score matching are reported in , and . Mean time to discharge for the control cohort was 10:16 ± 1:21 h ( P < 0.0001 for the comparison with the study group). Time to ambulation was not available for the control group as it does not apply to our standard figure-of-eight workflow.
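The offset between the higher procedural cost and the shorter ward stay can be illustrated with simple break-even arithmetic. The unit costs below are hypothetical placeholders and not the actual cost inputs of the analysis (those are detailed in the Supplementary material); only the €259.15 increment, the 60-min reduction in time to discharge, and the roughly neutral net result are taken from the study.

```python
# Purely illustrative break-even arithmetic with hypothetical unit costs.
extra_device_cost_eur = 259.15       # reported incremental procedural cost with the closure devices
ward_minutes_saved = 60              # reported reduction in time to discharge

assumed_bed_cost_per_min = 2.60      # hypothetical: ward bed occupancy and overhead (EUR/min)
assumed_nursing_cost_per_min = 1.70  # hypothetical: nursing time freed (EUR/min)

savings = ward_minutes_saved * (assumed_bed_cost_per_min + assumed_nursing_cost_per_min)
net_impact = extra_device_cost_eur - savings
print(f"Assumed ward/staff savings: €{savings:.2f} -> net impact per patient: €{net_impact:+.2f}")
```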
Although PVI procedures have been shortened significantly over the last years, the post-operative management of patients undergoing PVI has remained almost unchanged, normally requiring post-operative bedrest of up to 6 h and in-hospital observation of up to 24 h. As a consequence, the number of PVI procedures performed is limited by the available bed capacity, which was particularly affected in the era of restrictive rules applied for the COVID-19 pandemic containment. Thus, the possibility of performing PVI, and advanced procedures in general, in a day-care setting represents an interesting prospect in the electrophysiology (EP) field, and emerging data support the feasibility of same-day discharge approaches using several ablation modalities. Existing data also support a potential for cost saving driven by a reduction in hospital costs, although there are few prospective controlled data available on this topic. The advantages of using devices for vascular closure have already been assessed in the context of arterial procedures. In particular, they were shown to reduce the rate of complications and the need for bedrest, shortening the total hospital stay. Despite the use of large femoral sheaths (up to 12 French) in the EP field, there is still a lack of knowledge about vascular closure devices. However, based on the low pressure of the venous circulation, the effect of these systems can be even more advantageous than for arterial procedures. Recently, in a multicentre retrospective study investigating the usefulness of vascular closure devices after catheter ablation, a significant reduction in access-site complications and ambulation time was observed. In another randomized multi-centre trial that assessed the use of the VASCADE MVP Venous Vascular Closure System, improvements were demonstrated in time to ambulation, total post-procedure time, time to discharge eligibility, time to haemostasis, and patient satisfaction. Our study demonstrates that an ambulatory strategy for pulmonary vein isolation procedures is feasible, considering the time to achieve haemostasis, time to ambulation, and time to discharge. This is of considerable importance for such increasingly rapid procedures, performed during intense anticoagulation and in an era favouring day-case management. Since the majority of complications leading to prolonged hospitalization are related to the vascular approach, the use of a system able to properly close the femoral access is of critical importance to improve femoral access management after the procedure. Furthermore, patient satisfaction with the post-operative time was considerably higher than it would have been for a post-operative stay lasting 2–3 h longer. In a time of shortage of healthcare workers and stress on hospital resources, the efficiency gain from medical technology innovations can be extremely beneficial. In our study, the Perclose ProGlide™ system, despite bringing a higher upfront cost than the standard treatment, improves hospital efficiency and patient satisfaction. By reducing the ward stay and the staff costs, the Perclose ProGlide™ closure approach offsets the cost of the device, resulting in a final neutral impact for the hospital per single patient.
Limitations
There are several limitations in this study. The main limitation is the observational nature of the study design, making definitive comparisons with other established workflows impossible.
However, to provide a basis for comparison, we performed a retrospective comparison with a propensity-matched cohort that can serve as a reference point for the time to discharge. Similarly, the analysis of costs for the comparator treatment is based on the price of the materials at the time the data were collected, without a dedicated case-control design. Also, patients were all managed with the Perclose ProGlide™ closure device; thus, the satisfaction analysis was based on a comparison with an only ‘virtual’ experience for patients. We have chosen to only study uncomplicated PVI procedures in a first stage, excluding patients with a higher probability of access-related complications. Nevertheless, the stated exclusion criteria appear uncommon in straightforward PVI, as no patients scheduled for such procedures were ineligible for the study protocol. Finally, the time to discharge in our study group is obviously the outcome of a micromanaged patient population and may very well represent the best achievable scenario. When comparing the time to discharge in the study group with the propensity-matched control group, it is important to keep in mind that no particular efforts were made to optimize the time to discharge for the latter. Regarding technical aspects of our workflow, we only assessed safety and feasibility using two venous sheaths with 8F diameter. We cannot extrapolate results to workflows using three or more sheaths. In addition, a two-sheath workflow where one sheath is large bore (e.g. cryoballoon) was not tested either. The authors do report anecdotal experience supporting the feasibility of a three-sheath approach, and also report feasibility of large access closure if appropriate device recommendations are followed (specifically for the study device, the use of pre-close rather than post-close and the potential consideration of two devices per access site).
The PROPVI trial demonstrates that the ambulatory management of PVI using the percutaneous Perclose ProGlide™ closure device is safe and effective, appropriately closing venous accesses. The use of the closure device for PVI led to safe discharge of patients within 6 h of the intervention in 96% of the population, with no major complications observed during the follow-up period. The ambulatory management described in the article is useful for reducing the post-PVI recovery time, leading to a significantly improved patient experience. The cost of the Perclose ProGlide™ devices is balanced by savings made through the reduced use of the day-case department and by the decreased nursing time required. Further randomized trials are needed to better demonstrate the benefits of this approach.
Supplementary material is available at Europace online.
euad022_Supplementary_Data
|
Molecular pathology and clinical implications of diffuse glioma
|
448ed289-b367-4d99-b3e0-b908aa175413
|
10106158
|
Pathology[mh]
|
Diffusely infiltrating gliomas or diffuse gliomas are the most common primary tumors of the central nervous system (CNS), accounting for 30% and 80% of all primary and malignant primary CNS tumors, respectively. Currently, the prognosis of diffuse gliomas remains dismal, even after comprehensive treatments, including surgery, radiotherapy (RT) and/or chemotherapy, and tumor treating fields. More than 60% of cases of diffuse glioma are glioblastoma, the most aggressive type of CNS tumors, with a median overall survival of approximately 14 to 16 months. There has been limited progress in improving glioma outcomes over the past 15 years. This is largely attributed to the unique anatomic location, biological characteristics, developmental, genetic, epigenetic, and microenvironmental features of gliomas that render them resistant to conventional and novel treatments. Additionally, the traditional classification of diffuse gliomas only by histological features cannot provide enough information for clinicians to have a better understanding of the prognosis and optimal therapy for patients with specific subgroups of gliomas. However, the rapid development of molecular pathology brings new hope for improving prognosis and consequently the outcome of gliomas. Since the 2016 World Health Organization (WHO) classification of the CNS tumors (WHO CNS2016), the diagnosis of diffuse gliomas is determined by both molecular and pathological features, implying that glioma diagnoses should be structured in the molecular era. Both mutations in isocitrate dehydrogenase gene ( IDH ) and the chromosomal 1p/19q codeletion have been integrated with morphologic observations to determine the final diagnosis of diffuse gliomas. Additional molecular variations and their clinical relevancies are being continuously discovered, accompanied with an expansion of knowledge on the genetic basis of tumorigenesis. Accumulating evidence has shown that more molecular features can contribute to a more accurate diagnosis and risk stratification of gliomas. Based on these findings, especially the recommendations of the Consortium to Inform Molecular and Practical Approaches to CNS tumor classification, the summary of the fifth edition of the WHO Classification of CNS Tumors, published in 2021 (WHO CNS2021), advanced the role of molecular pathology in CNS tumor classification. In particular, WHO CNS2021 classifies gliomas into more biologically and molecularly defined types/subtypes, which thus provide new opportunities to improve the management of gliomas, clinical trial design, and evaluation of new therapies. In this review, we highlight the major advances of molecular pathology in WHO CNS2021, with a particular focus on their applications in clinical practices rather than providing an exhaustive review of each molecular marker. In addition, we summarize the potential implications of molecular pathology advances for the therapy of gliomas and discuss current challenges and future development directions. The substantial changes incorporated in WHO CNS2021 are advancing the role of molecular diagnostics in CNS tumor classification, which remain rooted in histological and immunohistochemistry analyses. The key molecular features, which are important for the integrated classifications of gliomas, are summarized in Table . Among these molecular features, some are readily and consistently used for the classification or grading of tumors, whereas others are not required but support tumor classification. 
In WHO CNS2021, the term “type” is used instead of “entity,” and “subtype” is used instead of “variant.” Grading is also considered within tumor types, and modifier terms such as “anaplastic” are excluded from the diagnosis of gliomas. Although glioma grades were traditionally written in Roman numerals, WHO CNS2021 changed them to Arabic numerals. WHO CNS2021 reclassified the diffuse gliomas of WHO CNS2016 according to their similarities in molecular/genetic features and divided them into three different families: (1) adult-type diffuse gliomas, which account for the majority of primary brain tumors in adults; (2) pediatric-type diffuse low-grade gliomas, which are expected to have good prognoses; and (3) pediatric-type diffuse high-grade gliomas, which are expected to be aggressive. A number of molecular markers, such as CDKN2A/B homozygous deletion, EGFR amplification, TERT promoter mutation, and combined whole chromosome 7 gain and whole chromosome 10 loss (+7/−10), have contributed to the classification and grading of gliomas in WHO CNS2021.

Currently, all IDH -mutant diffuse astrocytic tumors are considered a single type (astrocytoma, IDH -mutant), which is further classified into WHO grade 2, 3, or 4 according to both histological features and CDKN2A/B homozygous deletion status, as recommended in the update of the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (Not Official WHO; cIMPACT-NOW). IDH -wildtype diffuse astrocytic gliomas in adults are diagnosed as glioblastoma, IDH -wildtype, if there is microvascular proliferation or necrosis, or if one or more of three genetic parameters ( TERT promoter mutation, EGFR gene amplification, +7/−10) is present, according to cIMPACT-NOW updates 3 and 6. Here, we provide a comprehensive overview of the major changes in the diagnosis of gliomas in the latest edition of the WHO classification compared with WHO CNS2016 [Table ].

Pediatric-type diffuse low-grade gliomas are often characterized by the presence of genetic alterations such as BRAF V600E mutation, FGFR1 alteration, MYB or MYBL1 rearrangement, or other MAPK pathway alterations. Their classification into the following types: “diffuse astrocytoma, MYB - or MYBL1 -altered; angiocentric glioma; polymorphous low-grade neuroepithelial tumor of the young; and diffuse low-grade glioma, MAPK pathway-altered,” is based on both the morphological characteristics and the genetic features of these tumors. Pediatric-type diffuse high-grade gliomas also include the following four types: “diffuse midline glioma, H3 K27-altered; diffuse hemispheric glioma, H3 G34-mutant; diffuse pediatric-type high-grade glioma, H3 -wildtype and IDH -wildtype; and infant-type hemispheric glioma.” Diffuse midline gliomas, H3 K27-altered, involve thalamic, spinal, and diffuse brainstem gliomas, which usually occur in children and only rarely in adults; the H3 K27 alteration is characterized by K27M mutations in either H3F3A or HIST1H3B/C , or by other changes such as overexpression of the EZHIP protein or EGFR mutations. Recently, it has been reported that adult H3 K27-altered midline gliomas have molecular features distinct from those of pediatric patients, including a higher proportion of localization in the thalamus or spinal cord, and longer survival.
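To make the adult-type decision rules summarized above easier to follow, the sketch below expresses them as a simplified function. It is an illustration only, not a diagnostic tool: the boolean input flags (for example, `idh_mutant`, `codel_1p19q`, `cdkn2ab_homdel`) are hypothetical names chosen for this example, the logic omits histological grading nuances as well as the pediatric-type families, and real-world classification always requires integrated histological, immunohistochemical, and molecular review.

```python
def classify_adult_diffuse_glioma(idh_mutant: bool,
                                  codel_1p19q: bool,
                                  cdkn2ab_homdel: bool,
                                  necrosis_or_mvp: bool,
                                  tert_promoter_mut: bool,
                                  egfr_amplified: bool,
                                  gain7_loss10: bool,
                                  histological_grade: int) -> str:
    """Simplified, WHO CNS2021-style typing of adult diffuse gliomas.

    Illustration only; a real diagnosis integrates full histology,
    immunohistochemistry, and additional molecular markers.
    """
    if idh_mutant and codel_1p19q:
        # Oligodendroglioma, IDH-mutant and 1p/19q-codeleted (grade 2 or 3 by histology)
        return "Oligodendroglioma, IDH-mutant and 1p/19q-codeleted"
    if idh_mutant:
        # Astrocytoma, IDH-mutant: CDKN2A/B homozygous deletion implies CNS WHO grade 4
        # irrespective of histological grade.
        grade = 4 if cdkn2ab_homdel else histological_grade
        return f"Astrocytoma, IDH-mutant, CNS WHO grade {grade}"
    # IDH-wildtype diffuse astrocytic glioma in an adult: glioblastoma if there is
    # microvascular proliferation/necrosis or any of the three molecular criteria
    # (TERT promoter mutation, EGFR amplification, +7/-10).
    if necrosis_or_mvp or tert_promoter_mut or egfr_amplified or gain7_loss10:
        return "Glioblastoma, IDH-wildtype, CNS WHO grade 4"
    return "Diffuse glioma, IDH-wildtype (does not meet the simplified criteria above)"


# Example: IDH-wildtype tumor, histologically low grade, but TERT promoter mutant
print(classify_adult_diffuse_glioma(
    idh_mutant=False, codel_1p19q=False, cdkn2ab_homdel=False,
    necrosis_or_mvp=False, tert_promoter_mut=True,
    egfr_amplified=False, gain7_loss10=False, histological_grade=2))
# -> Glioblastoma, IDH-wildtype, CNS WHO grade 4
```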
Diffuse pediatric-type high-grade gliomas, H3 -wildtype and IDH -wildtype, are wildtype for both the H3 and IDH gene families and require the integration of histopathological and molecular data, such as mutational and methylome data, for the final diagnosis. Infant-type hemispheric gliomas are novel high-grade gliomas that occur in infants and very young children (commonly <4 years old) and are characterized by fusions of the ALK , ROS1 , NTRK1/2/3 , or MET genes.

WHO CNS2021 does not recommend specific methods for the molecular diagnostic assessment of individual genetic alterations. With the increasing use of molecular markers in the diagnosis of gliomas, challenges have arisen regarding the methodology of molecular testing, and the same is true for performing integrated diagnostics. The traditional technologies used in pathological diagnosis, including light microscopy, histochemical stains, electron microscopy, immunohistochemistry, and DNA fluorescence in situ hybridization (FISH), cannot fulfill all of the requirements for the diagnosis of gliomas. A variety of nucleic acid detection methods, such as DNA/RNA sequencing and RNA expression profiling, have clearly shown their contribution to the diagnosis and classification of gliomas. However, how to properly incorporate these novel methodologies into routine molecular testing of formalin-fixed paraffin-embedded samples remains a challenge. These challenges include: (1) the availability and choice of high-throughput DNA/RNA sequencing methods; (2) a cost- and time-effective workflow; (3) intensive communication and collaboration among people with different academic backgrounds (e.g., pathologists, molecular biologists, and bioinformaticians); (4) comparability of test results between different testing centers; and (5) security of human genetic data. The implementation of combined phenotypic-genotypic diagnostics in some large centers has suggested that most of these challenges can be readily overcome in the near future. Here, we also provide a roadmap for the diagnosis of gliomas according to the experience in our institute [Figure ].

Although WHO CNS2021 is likely only an intermediate stage toward an even more precise classification in the future, it has the potential to enable clinicians to better understand the prognosis and optimal therapy for patients with specific gliomas. Several recent studies have revealed differences in the benefit of total resection across glioma subtypes, suggesting that glioma surgery should be planned according to tumor classification. Classification of gliomas into types according to their molecular features is also useful for rational treatment design, as it might explain the variability in patient response to the same therapeutic approaches. Predictive molecular markers can be used to identify gliomas that are sensitive to distinct postoperative therapeutic approaches. Gliomas with different molecular features have their own unique immune microenvironments. The increasing use of molecular markers has also brought the implementation of targeted therapeutic approaches for some subtypes of gliomas, such as pediatric gliomas with BRAF mutations. Additionally, genomically defined patient subgroups allow for the study of more homogeneous populations in clinical trials. Moreover, longitudinal molecular testing can facilitate precision medicine and an even better design of clinical trials.
Altogether, the advances in molecular pathological detection of gliomas not only promote the precision diagnosis of tumors but also facilitate the progress of glioma therapy from surgery to clinical trials.

Surgery

Tumor resection is the most important step in the therapy of gliomas. The main aims of surgery are: (1) to obtain tissue for histopathological and molecular pathology assessment that will guide postoperative adjuvant therapy, such as chemotherapy, radiation therapy, and immunotherapy; (2) to relieve the mass effect of the tumor; (3) to delay the malignant progression of the tumor and improve prognosis; and (4) to alleviate glioma-related neurological deficits, including headache, postoperative glioma-related epilepsy, and other symptoms. Earlier surgical resection is important for an improved prognosis: accumulating evidence has suggested that early surgical resection can prolong the overall survival of patients with glioma, delay malignant progression, and avoid neurological deficits. Maximal safe resection, that is, removing as much of the tumor as possible without causing permanent neurological dysfunction, is currently the recommended approach for all subtypes of gliomas. Hence, if gliomas do not involve eloquent areas, total resection or supratotal resection is recommended to improve survival outcomes. In addition, compared with total resection, supratotal resection has been suggested to be more beneficial for prolonging survival and controlling glioma-related epilepsy. For glioblastoma, supratotal resection is defined as a resection extending beyond the contrast-enhancing region on T1-enhanced images to include the high-intensity region on T2-FLAIR images. However, these conclusions have mainly been derived from studies of gliomas in the anterior temporal lobe, which carries fewer neurological functions than other areas, implying that they might not be applicable to gliomas in other brain regions.

To improve the accuracy of surgical resection and preserve fundamental neurological functions, such as motor, sensory, and language functions, awake craniotomy is recommended. Both positive and negative mapping strategies, which identify areas that are or are not associated with neurological functions, respectively, have been recommended in awake craniotomy. However, it is still controversial whether positive or negative mapping should be used. Compared with positive mapping, negative mapping results in smaller surgical regions and is accompanied by lower rates of intraoperative stimulation-induced seizures and of side effects following resection. However, negative mapping has also been associated with a higher rate of postoperative neurological impairments due to false-negative mapping results caused by a lack of cortical mapping experience or improper mapping measurements. Therefore, there is an urgent need for more advanced strategies or technologies to improve this situation.

The choice of electrical stimulators also affects the outcome of surgery. Recently, we demonstrated that the sensitivity of bipolar electrical stimulators is not sufficient for the identification of subcortical fibers, possibly due to the limited region of effective stimulation. Given the reduced neuroplasticity of subcortical pathways, the use of bipolar electrical stimulators alone might cause surgery-related impairments of neurological functions.
Hence, we recommended the use of bipolar electrical stimulators combined with monopolar electrical stimulators and the motor-evoked potential technique to identify the corticospinal tract before removing gliomas adjacent to the internal capsule. For gliomas lying near (<5 mm from) the posterior superior longitudinal fasciculus/arcuate fasciculus, we suggested a conservative strategy of tumor resection to preserve linguistic functions.

The rapid development of molecular pathology has led to the classification of gliomas into more homogeneous types/subtypes. However, molecular characteristics do not yet guide surgical strategies for gliomas, which remain tightly associated with the results of magnetic resonance imaging (MRI). Whether different resection approaches should be adopted for gliomas with different molecular characteristics remains unanswered. A multicenter study revealed that the extent of resection could not stratify the overall survival of patients with IDH -mutant and 1p/19q-codeleted oligodendrogliomas. However, gross total resection improved the overall survival of patients with IDH -mutant astrocytoma or IDH -wildtype glioblastoma. Altogether, these findings suggest that different surgical strategies should be adopted for distinct types/subtypes of gliomas. However, obtaining the molecular features of gliomas before or during surgery remains a challenge. Fortunately, with the development of artificial intelligence (AI) and radiomics, predicting the molecular features of gliomas with models based on radiomic characteristics has become a promising possibility. Accumulating studies have successfully predicted IDH mutation, 1p/19q codeletion, and TERT promoter mutation status by using AI models. The increasing accuracy of these predictive models has brought them close to clinical use, and enlarging sample sizes and refining the underlying algorithms have the potential to further improve their accuracy and robustness. We believe that validation of these predictive models in prospective clinical trials will make the use of molecular features for guiding tumor resection a reality in the near future.

Radiotherapy and chemotherapy

The standard postoperative treatment options for adult patients with glioma, including concomitant RT and DNA-alkylating agent therapy, have not substantially changed over the last 15 years. Although some newly recognized types of glioma have been described in WHO CNS2021, which therapeutic approaches should be adopted for these tumors remains unclear. From the perspective of health economics and reducing overtreatment, it is important to identify which patients might benefit from intensive RT and concurrent or adjuvant chemotherapy. Similarly, it is also critical to identify which patients might be cured with less intense RT or chemotherapy, especially in the case of pediatric patients. Overall, RT is adopted for most adult patients with diffuse gliomas, and different doses are recommended for patients with grade 2 and grade 3/4 tumors, respectively. However, the recommended RT doses are based on the results of clinical trials in patients classified by histological features. Thus, the impact of molecular subgroups on RT dose selection, especially which RT dose should be used for patients with grade 4 tumors defined only by molecular features, is still an open question.
Recently, both our retrospective and prospective studies indicated that high-dose RT (>54 Gy) should be adopted for patients with histological grade 2/3 IDH -mutant astrocytoma and histological grade 2/3 IDH -wildtype gliomas. Compared with RT, the association between molecular features and chemotherapy is relatively clearer. Here, we also summarize the molecular markers with predictive significance for guiding postoperative chemotherapy in patients with glioma.

O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation

Among predictive markers, the presence of MGMT promoter methylation has been associated with benefit from alkylating-agent chemotherapy in patients with glioblastoma, particularly elderly patients (aged ≥65 years). The DNA-alkylating agent temozolomide (TMZ) is the first-line drug used in the postoperative treatment of glioblastoma. MGMT, a DNA repair enzyme, can rapidly repair the major temozolomide-induced DNA adduct, O6-methylguanine, via self-alkylation. The alkylated MGMT is then degraded through the ubiquitylation pathway. Thus, the level of MGMT expression corresponds to the cellular capacity to repair O6-methylguanine, and deficient MGMT expression in glioma has been acknowledged as a predictive marker for TMZ sensitivity. The cytosine-phosphate-guanine (CpG) island (CGI) in the 5′ promoter region of MGMT is susceptible to DNA methylation, which suppresses MGMT transcription. The level of MGMT expression strongly depends on the level of methylation of its promoter region. In particular, MGMT promoter methylation, occurring in approximately 40% of glioblastomas, has been closely associated with benefit from TMZ therapy and prolonged survival of patients with glioblastoma. In addition, several clinical trials and studies have also revealed that MGMT promoter methylation is a highly relevant biomarker for guiding treatment with temozolomide. Taken together, MGMT promoter methylation can be used as a predictive marker for TMZ sensitivity.

However, this finding was not obtained from studies using a homogeneous cohort of patients with IDH -wildtype glioblastoma, as glioblastoma cases included both IDH -mutant (<15%) and IDH -wildtype (>85%) tumors prior to WHO CNS2021. Several studies have recently shown that the predictive role of MGMT promoter methylation for the response to temozolomide might be restricted to glioblastoma, IDH -wildtype (WHO CNS2016). MGMT promoter methylation is present in most IDH -mutant gliomas and might thus serve as a prognostic rather than a predictive marker. This might be because the cutoff value determined in IDH -wildtype glioblastoma cases is not suitable for IDH -mutant cases. Our recent study using a homogeneous cohort of patients with grade 4 IDH -mutant astrocytoma showed that although MGMT promoter methylation has predictive value in this type of glioma, its cutoff value should be higher than that for IDH -wildtype glioblastoma. The level of methylation of the MGMT promoter can also be used to stratify the progression-free survival of patients with grade 2 or 3 IDH -mutant astrocytoma under TMZ therapy, using cutoff values significantly higher than those commonly used in IDH -wildtype glioblastoma cases. Together, these findings strongly suggest that the predictive cutoff value for MGMT promoter methylation in IDH -mutant gliomas must be reassessed.
The predictive value of MGMT promoter methylation should also be reevaluated in cases whose tumor classification has changed in WHO CNS2021 compared with WHO CNS2016 (e.g., glioblastoma, IDH -wildtype, determined by molecular characteristics but without morphological evidence). Notably, given that the MGMT gene is located on 10q26, whether loss of chromosome 10 or 10q affects the predictive value of MGMT promoter methylation remains an important issue. MGMT promoter methylation status has been used as a stratification factor for patient selection in clinical trials for gliomas, including IDH -wildtype glioblastoma and IDH -mutant astrocytoma. However, the use of MGMT promoter methylation has faced challenges in clinical practice because there are no alternative treatment choices for cases with an unmethylated MGMT promoter and because of the perceived uncertainty of test results. Alternative treatment choices rely on the success of clinical trials in gliomas with an unmethylated MGMT promoter. The uncertainty of test results is mainly due to the lack of widely available standardized tests, given the absence of an ideal testing method and the lack of a well-defined, accurate cutoff.

Currently, several methods are used for MGMT promoter methylation testing, including pyrosequencing (PSQ), gel-based methylation-specific PCR (MSP), methylation-specific quantitative PCR, methylation-specific quantitative PCR with a specific probe, MethyLight quantitative PCR, methylation-sensitive high-resolution melting, methylation-specific multiplex ligation-dependent probe amplification, and microarray chips (i.e., HM-850K chips). Among these methods, PSQ and MSP appear to be more prognostic for the overall survival of patients with glioma receiving TMZ. However, the best CpG sites and thresholds for these quantitative methods remain ambiguous. The MGMT promoter CGI contains 98 individual CpGs, numbered CpGs 1–98 according to their 5′-to-3′ location within the 762-bp promoter sequence. The CpG sites interrogated by MSP (CpGs 76–80 and 84–87) and by PSQ (typically within CpGs 72–95) have also been explored. Compared with MSP, PSQ has revealed high heterogeneity of methylation across individual CpGs. In spite of this, how many and which CpGs in the MGMT promoter CGI should be selected remains a controversial issue for PSQ testing, with various combinations, such as CpGs 72–83, 72–80, 72–77, 74–78, 74–89, 76–79, and 80–83, being used in distinct studies. We systematically compared the predictive value of all CpG combinations within CpGs 72–82 for MGMT mRNA expression by analyzing paired samples with both MGMT methylation PSQ testing and mRNA expression data, and found that the differences in predictive value among combinations containing four or more CpGs within CpGs 72–82 were marginal. This finding might explain the similar results obtained when different CpGs were examined.

The cutoff value is another important issue for PSQ testing of MGMT promoter methylation, especially for cases in which the level of methylation lies in the “gray zone” between a truly methylated and an unmethylated status. We have developed a novel analytical model to judge the methylation status of cases in the “gray zone.” This model evaluates the methylation status of each selected CpG according to its own cutoff value and defines MGMT methylation as present when the methylation of at least eight CpGs exceeds the respective threshold.
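To illustrate how such a per-CpG thresholding rule can be applied to pyrosequencing data, a minimal sketch follows. It mirrors the rule described above (CpG-specific cutoffs, with a call of “methylated” when at least eight CpGs exceed their thresholds), but the CpG positions and cutoff values shown are placeholders chosen for illustration, not validated clinical parameters, and the function is not a clinical assay.

```python
from typing import Dict

def call_mgmt_methylation(psq_values: Dict[int, float],
                          cpg_cutoffs: Dict[int, float],
                          min_positive_cpgs: int = 8) -> str:
    """Per-CpG thresholding of MGMT promoter PSQ data.

    psq_values  : measured methylation (%) per CpG position, e.g. {72: 12.0, ...}
    cpg_cutoffs : CpG-specific cutoffs (%); the values used below are
                  placeholders, NOT validated clinical thresholds.
    The promoter is called methylated when at least `min_positive_cpgs`
    CpGs exceed their respective cutoffs.
    """
    positive = [cpg for cpg, cutoff in cpg_cutoffs.items()
                if psq_values.get(cpg, 0.0) > cutoff]
    return "methylated" if len(positive) >= min_positive_cpgs else "unmethylated"


# Placeholder cutoffs for CpGs 72-82 (illustrative only)
example_cutoffs = {cpg: 10.0 for cpg in range(72, 83)}

# A hypothetical "gray zone" sample: some CpGs above, some below threshold
sample = {72: 14.2, 73: 9.1, 74: 12.5, 75: 11.8, 76: 8.4, 77: 13.0,
          78: 15.6, 79: 10.9, 80: 7.3, 81: 12.2, 82: 11.1}

print(call_mgmt_methylation(sample, example_cutoffs))  # -> "methylated" (8 CpGs exceed 10%)
```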
We further demonstrated that this novel model is particularly useful in cases with “gray zone” results according to the traditional testing approach. Its main drawback is that the optimal cutoff value for each CpG still needs to be adjusted, as the model was limited by the retrospective nature and the relatively small population size of our study. Taken together, the evaluation of MGMT promoter methylation status should be performed using validated testing methods, and the results should be properly analyzed for the best patient care.

1p and 19q codeletion

Apart from its role as a diagnostic marker, 1p/19q codeletion has been shown in two phase III clinical trials to be an independent predictive biomarker of benefit from upfront RT combined with chemotherapy with procarbazine, lomustine (CCNU), and vincristine (PCV). However, the mechanism underlying the favorable treatment responses of patients with IDH -mutant and 1p/19q-codeleted gliomas remains poorly understood. A systematic functional investigation of genes located on chromosomes 1p and 19q, whose expression levels also have prognostic value for non-1p/19q-codeleted gliomas, might address this question. Importantly, whether 1p/19q codeletion is also a predictive marker for TMZ treatment remains to be answered in clinical trials, given that PCV has more side effects than TMZ and that patients with IDH -mutant and 1p/19q-codeleted oligodendroglioma already have relatively long survival (median overall survival of >10 years). Of note, only whole-arm 1p/19q codeletion, but not partial deletion of either chromosome arm, is a predictive biomarker. In addition, the frequency of false-positive FISH 1p/19q codeletion in adult diffuse astrocytic gliomas has been found to be relatively high. Thus, special care should be taken in interpreting positive FISH results, especially for IDH -wildtype gliomas or IDH -mutant tumors without TERT promoter mutations.

Other molecular features associated with chemotherapy

Although MGMT promoter methylation is the only commonly acknowledged predictive biomarker for the response of gliomas to TMZ, the discordance between MGMT promoter methylation and protein expression levels in a small subset of cases suggests the existence of other mechanisms contributing to the upregulation of MGMT. Such potential mechanisms include the MGMT promoter super-enhancer and MGMT rearrangement. Likewise, miR-181d has been found to decrease MGMT mRNA stability or reduce protein translation by binding to the 3′ untranslated region of MGMT transcripts, and it can thus be used to predict the TMZ response of glioblastomas with an unmethylated MGMT promoter. In addition to MGMT , DNA mismatch repair (MMR) defects caused by mutations of MMR genes also lead to TMZ resistance in recurrent gliomas. Such MMR defects are more likely to occur in recurrent IDH -mutant astrocytomas with MGMT promoter methylation. We have identified a novel DNA methylation-based signature of 31 CpG sites that predicts the response of glioblastomas with an unmethylated MGMT promoter to TMZ. All of these findings suggest that additional predictive biomarkers should be considered in the precision management of gliomas. Increasing evidence has shown that RNA regulation also plays important roles in the response of gliomas to TMZ. For instance, increased expression of c-MET or activation of the MET signaling pathway contributes to TMZ resistance, especially in secondary glioblastomas.
Upregulation of the long noncoding RNA lnc-TALC also enhances the TMZ resistance of glioblastomas by promoting the expression of c-MET through competitive binding of miR-20b-3p. Circular RNA circASAP1, whose expression is significantly increased in recurrent glioblastoma tissues and TMZ-resistant cells, promotes the TMZ resistance of gliomas by upregulating the expression of NRAS through sponging of miR-502–5p. RNA N6-methyladenosine (m6A) has also been shown to play an important role in the TMZ resistance of gliomas. Interestingly, m6A is dynamically regulated by methyltransferases (“writers”), binding proteins (“readers”), and demethylases (“erasers”). Increased levels of m6A modification have been positively associated with glioma malignancy and chemotherapy resistance, and elevated expression of METTL3, an m6A writer, has also been shown to be required for the malignant progression and TMZ resistance of gliomas. These findings suggest that additional stratification based on transcriptome profiles holds promise for further improving the prediction of the TMZ response of gliomas. Together, the above findings indicate that a molecular panel consisting of genomic alterations, DNA epigenetic alterations, and RNA profiles has the potential to predict the TMZ response of gliomas with or without MGMT promoter methylation.

Targeted therapy

With the increasing understanding of molecular features, the targeted therapy of gliomas has become a reality. In particular, IDH mutation, the most prominent genetic feature of adult gliomas, is known to affect cell death, epigenetic status, and tumor metabolism via the synthesis of 2-hydroxyglutarate. Blocking this effect with IDH1/IDH2 inhibitors has been shown to be promising in preclinical models. In a phase I study, ivosidenib (AG-120), a small-molecule inhibitor of IDH1, was shown to prolong disease control and reduce the growth of advanced gliomas with IDH mutations. Several other inhibitors are also currently under evaluation, and further clinical trials are expected to provide pivotal insights into the efficacy and toxicity of these compounds in patients. Regarding IDH -wildtype adult gliomas, whole-exome sequencing of large sample sets revealed that the most commonly mutated oncogenic pathways in adult IDH -wildtype gliomas include the receptor tyrosine kinase (RTK)–PI3K, TP53, and RB pathways. Both RTK inhibitors targeting EGFR and RTK–PI3K pathway inhibitors have been studied in clinical trials, however without encouraging results. This might be associated with the high intratumoral heterogeneity and evolution of gliomas. Thus, multitargeted therapeutic approaches have greater potential to improve the survival of patients with gliomas. Regorafenib, a VEGF receptor 2 and multikinase inhibitor, has been found to increase the survival of patients with recurrent glioblastoma compared with CCNU in a randomized phase II trial. Notably, WHO CNS2021 also explicitly recommends the evaluation of fusion genes in adult gliomas, including FGFR3-TACC3 , MET , EGFR , and NTRK fusions. These fusion genes are important therapeutic targets for gliomas. Interestingly, FGFR–TACC fusions occur in 3.5% of pediatric gliomas and approximately 2.9% of IDH -wildtype glioblastomas and have been shown to commonly co-occur with CDK4 amplification. FGFR3-TACC3 -positive patients benefited from treatment with an FGFR inhibitor in a clinical study with a small sample size.
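Because WHO CNS2021 recommends evaluating fusion genes and several of them are directly actionable, a small sketch of how fusion calls might be screened against a target list is given below. The target-to-drug-class mapping and the example calls are illustrative assumptions drawn loosely from the fusions discussed in this section, not an exhaustive or clinically validated panel.

```python
# Illustrative screening of RNA-seq fusion calls against actionable glioma targets.
# The target-to-drug-class mapping below is an assumption for this sketch,
# not a validated clinical panel.
ACTIONABLE_TARGETS = {
    "FGFR3": "FGFR inhibitor",
    "MET": "MET inhibitor",
    "ALK": "ALK inhibitor",
    "ROS1": "ROS1 inhibitor",
    "NTRK1": "TRK inhibitor",
    "NTRK2": "TRK inhibitor",
    "NTRK3": "TRK inhibitor",
    "EGFR": "EGFR-directed therapy",
}

def flag_actionable_fusions(fusion_calls):
    """Return fusion calls whose 5' or 3' partner is in the actionable list.

    fusion_calls: iterable of (gene5, gene3) tuples, e.g. from a fusion caller.
    """
    flagged = []
    for gene5, gene3 in fusion_calls:
        for gene in (gene5, gene3):
            if gene in ACTIONABLE_TARGETS:
                flagged.append((f"{gene5}-{gene3}", gene, ACTIONABLE_TARGETS[gene]))
                break
    return flagged

# Hypothetical calls for a single tumor
calls = [("FGFR3", "TACC3"), ("PTPRZ1", "MET"), ("ABC1", "XYZ2")]
for fusion, target, therapy in flag_actionable_fusions(calls):
    print(f"{fusion}: candidate target {target} ({therapy})")
```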
MET fusions, including TFG–MET , CLIP2–MET , and PTPRZ1–MET , have become diagnostic molecular markers for the newly defined infant-type gliomas. We have demonstrated that PTPRZ1–MET fusions and MET exon 14 skipping exist in about 15% of adult secondary glioblastomas. In a phase I clinical trial, a novel small-molecule MET inhibitor, PLB1001, successfully suppressed the growth of tumors harboring a PTPRZ1–MET fusion. In addition, EGFR–SEPT14 (3.7%) and EGFR–PSPH (1.9%) fusions have also been reported in glioblastomas, with EGFR–SEPT14 activating STAT3 signaling and conferring sensitivity to EGFR inhibition in a preclinical study. These findings offer new hope for the treatment of gliomas. However, the mutational evolution of gliomas under therapy cannot be ignored, especially when subclone expansion is influenced by strong selection pressures and is accompanied by adaptation in response to treatment modalities. Although targeted therapies have not yet broadly improved survival in patients with gliomas, a multimodal treatment approach based on the dynamic changes in molecular characteristics might improve survival outcomes and quality of life.

Immunotherapy

Currently, immunotherapy of glioma remains a profound challenge. Although it is now acknowledged that the CNS is not immune-privileged, the unique immune microenvironment of gliomas resembles a “cold tumor” phenotype owing to the blood-brain barrier. The “cold tumor” phenotype of gliomas has been associated with poor responses to immune-stimulatory therapies, such as immune checkpoint blockade. Additionally, the relatively low number of coding mutations and the high intratumor heterogeneity also limit the development of immunotherapies for gliomas. Nevertheless, the rapid development of molecular pathology has advanced our understanding of the genetic and immunological features of gliomas, thus offering opportunities for the implementation of immunotherapy as a treatment option. For instance, the IDH mutation, a genetic driver of about half of adult gliomas, is known to suppress leukocyte chemotaxis by reducing the expression of cytotoxic T lymphocyte-associated genes and interferon-γ (IFNγ)-inducible chemokines, including CXC-chemokine ligand 10 (CXCL10). A study at the single-cell level also demonstrated that lymphocytes, including T cells and NK cells, were enriched in IDH -wildtype gliomas. All of these findings indicate that gliomas with different genetic characteristics have distinct immune microenvironments, suggesting that subsequent immunotherapy approaches should be developed on the basis of the latest pathological classification of gliomas. Although clinical trials of immune checkpoint blockade targeting the PD1–PD1 ligand 1 (PDL1) axis failed to improve survival in the overall enrolled populations, a subsequent study showed that a subgroup of patients with specific molecular features might benefit from PD1/PDL1 blockade. This discrepancy suggests that future studies are required to identify molecular markers for immunotherapy. In particular, adoptive T-cell therapy holds considerable promise for the treatment of gliomas. Proper target selection is a prerequisite for the success of this therapeutic approach and requires a full understanding of the molecular characteristics of gliomas, especially the specific markers expressed on the cell plasma membrane.
A recent clinical trial of CAR T-cells targeting EGFRvIII, HER2, and IL-13Rα2 failed to achieve benefit in gliomas, which was attributed to high intratumoral heterogeneity. Single-target CAR T-cells kill only the portion of tumor cells that express the target molecule, and this is accompanied by the expansion of tumor cells lacking target expression. Therapeutic vaccination is another promising potential modality for gliomas but has not been clinically verified. The vaccination approach is also strongly tied to specific molecular alterations of gliomas, such as EGFRvIII and IDH1 -R132H. A substantial problem with single-peptide vaccination is immune escape caused by intratumoral heterogeneity and selective pressure, which results in antigen loss and glioma recurrence. Thus, prediction and dynamic monitoring of molecular features during treatment have become a major issue for the immunotherapy of gliomas. However, several clinical and ethical barriers to the acquisition of longitudinal glioma biopsy samples still exist. MRI-based monitoring approaches also face difficulties, including pseudoprogression, radiation-induced necrosis, and the inability to reflect changes in molecular characteristics during treatment. The emergence of liquid biopsy sequencing based on cerebrospinal fluid has provided an alternative, though it is still at an early stage. Cerebrospinal fluid liquid biopsy has the potential to improve the diagnosis, clinical care, and decision-making for gliomas. Altogether, current findings point to the need for the continued development of predictive biomarkers and dynamic monitoring methods for immune-based therapies for gliomas.
To improve the accuracy of surgical resection and preserve fundamental neurological functions, such as motor, sensory, and language functions, the performance of awaken craniotomy is recommended. Both positive and negative mapping strategies, identifying areas that are associated or not associated with neurological functions, respectively, have been recommended in awaken craniotomy. However, it is still controversial whether positive or negative mapping should be used in craniotomy. Compared with positive mapping, negative mapping results in smaller surgical regions and is accompanied with lower percentages of intraoperative stimulated epilepsy and side effects following resection. However, negative mapping has also been associated with a higher rate of postoperative neurological impairments due to false-negative mapping results caused by the lack of cortical mapping experience or improper mapping measurements. Therefore, there is an urgent need for more advanced strategies or technologies to improve this situation. The choice of electrical stimulators also affects the outcome of surgery. Recently, we demonstrated that the sensitivity of bipolar electrical stimulators is not enough for the identification of subcortical fibers, possibly due to the limited region of effective stimulation. Given the reduced neuroplasticity, the use of bipolar electrical stimulators might cause surgical-related impairments of neurological functions. Hence, we recommended the use of bipolar electrical stimulators combined with monopolar electrical stimulators and motor-evoked potential technique to identify the cortical-spinal tract before removing gliomas adjacent to the internal capsule. For gliomas that are near to the posterior superior longitude fasciculus/arcuate fasciculus (<5 mm), we suggested a conservative strategy of tumor resection for preserving linguistic functions. The rapid development of molecular pathology has led to the classification of gliomas into more homogenous types/subtypes. However, it seems that their molecular characteristics cannot guide the surgery strategies used for gliomas which are tightly associated with the results of magnetic resonance imaging (MRI). Whether different resection approaches should be adopted for gliomas with different molecular characteristics remains unanswered. A multicenter study revealed that the extent of resection cannot stratify the overall survival of patients with oligodendrogliomas with IDH -mutation and 1p/19q codeletion. However, gross total resection improved the overall survival of patients with astrocytoma with IDH -mutation or glioblastoma with IDH -wildtype. Altogether, these findings suggested that different surgery strategies should be adopted for distinct types/subtypes of gliomas. However, obtaining the molecular features of gliomas before/during surgery remains a challenge. Fortunately, with the development of artificial intelligence (AI) and radiomics, predicting the molecular features of gliomas through models based on radiomic characteristics is a hopeful potential. Accumulating studies have successfully predicted the status of IDH , 1p/19q codeletion, and TERT mutation by using AI models. The increasing accuracy of these predictive models has brought them close to being used in clinical diagnosis, and enlarging the sample size and improving their advanced algorithms have the potential to further improve the accuracy and robustness of these models. 
We believe that validation of these predictive models through prospective clinical trials will make the use of molecular features for guiding tumor resection a reality in the near future. The standard postoperative treatment options for adult patients with glioma, including concomitant RT and DNA alkylating agent therapy, have not substantially changed over the last 15 years. Although some newly recognized types of glioma have been described in WHO CNS2021, which therapeutic approaches should be adopted for these tumors remains unclear. From the perspective of health economics and reducing overtreatment, it is important to identify which patients might benefit from intensive RT and concurrent or adjuvant chemotherapy. Similarly, it also critical to identify which patients might be cured with less intense RT or chemotherapy, especially in the case of pediatric patients. Overall, RT is adopted for most adult patients with diffuse gliomas, and different doses are recommended for patients with grade 2 and grade 3/4 tumors, respectively. However, the recommendation of RT doses is based on the results of clinical trials using patients classified by histological features. Thus, the impact of molecular subgroups on the RT dose selection, especially what RT dose should be used for patients with grade 4 tumors defined only by molecular features is still an open question. Recently, both our retrospective and prospective studies indicated that high-dose RT (>54 Gy) should be adopted for patients with histological grade 2/3 IDH -mutant astrocytoma and histological grade 2/3 IDH -wildtype gliomas. Compared with RT, the association between molecular features and chemotherapy is relatively clearer. Here, we also summarize the molecular markers with predictive significance for guiding postoperative chemotherapy treatment of patients with glioma. O 6 -methylguanine-DNA methyltransferase (MGMT) promoter methylation Among predictive markers, the presence of MGMT promoter methylation has been associated with benefit from alkylating-agent chemotherapy in patients with glioblastoma, particularly elderly patients (aged ≥65 years). The DNA-alkylating agent temozolomide (TMZ) is the first in a class of drugs used in the postoperative treatment of glioblastoma. MGMT, a DNA repair enzyme, can rapidly repair major temozolomide-induced DNA-adducts, 6-O-methylguanine, via self-alkylation. The alkylated MGMT is then degraded through the ubiquitylation pathway. Thus, the levels of expression of MGMT correspond to the repair capacity of cellular 6-O-methylguanine, and deficiency in the expression of MGMT in glioma has been acknowledged as a predictive marker for TMZ sensitivity. The cysteine-phosphate-guanine (CpG) island (CGI) in the 5′ promoter region of MGMT is susceptible to DNA methylation, which suppresses MGMT transcription. The level of expression of MGMT strongly depends on the level of methylation of its promoter region. In particular, MGMT -promoter methylation, occurring in approximately 40% of glioblastoma, has been closely associated with the benefit from TMZ therapy and prolonged survival of patients with glioblastoma. In addition, several clinical trials or studies have also revealed that MGMT -promoter methylation is a highly relevant biomarker for guiding treatment with temozolomide. Taken together, MGMT -promoter methylation can be used as a predictive marker for TMZ sensitivity. 
However, this finding has not been obtained from studies using a homogeneous cohort of patients with glioblastoma with IDH -wildtype, as glioblastoma cases included both IDH -mutant (<15%) and IDH -wildtype (>85%) prior to WHO CNS2021. Several studies have recently shown that the predictive role of MGMT -promoter methylation for the response to treatment with temozolomide might be restricted to glioblastoma with IDH -wildtype (WHO CNS2016). MGMT promoter methylation is present in most IDH -mutant gliomas and might thus serve as a prognosis but not as a predictive marker. This might be due to the fact that the cutoff value determined in IDH -wildtype glioblastoma cases might not be suitable for IDH -mutant cases. Our recent study using a homogenous cohort of patients with astrocytoma with IDH -mutant grade 4 showed that although MGMT promoter methylation has predictive value in this type of glioma, its cutoff value should be higher than that for glioblastoma with IDH -wildtype. The levels of methylation of the MGMT promoter can also be used for the stratification of the progression-free survival of patients with astrocytoma with IDH -mutant grade 2 or 3 under TMZ therapy by using cutoff values significantly higher than those commonly used in IDH -wildtype glioblastoma cases. Together, these findings strongly suggested that the predictive cutoff value for MGMT promoter methylation in IDH -mutant gliomas must be reassessed. The predictive value of MGMT promoter methylation should also be reevaluated in cases where there have been changes in tumor classification in WHO CNS2021 (e.g., glioblastoma, IDH -wildtype determined by molecular characteristics but without morphological evidence) compared with those in WHO CNS2016. Notably, given that the MGMT gene is located on 10q26, whether loss of chromosome 10 or 10q affects the predictive value of MGMT promoter methylation remains an important issue. MGMT promoter methylation status has been used as a stratification factor for patient selection in clinical trials for gliomas, including glioblastoma with IDH -wildtype and astrocytoma with IDH -mutant. However, the use of MGMT promoter methylation has faced challenges in clinical practices due to the fact that there are no alternative treatment choices for cases with unmethylated MGMT promoter and due to the perceived uncertainty of test results. Alternative treatment choices rely on the success of clinical trials with unmethylated MGMT promoter gliomas. The uncertainty of test results is mainly due to the lack of wide availability of standardized tests, given the absence of an ideal testing method and the lack of a defined accurate cutoff. Currently, there are several methods used for MGMT promoter methylation testing, including pyrosequencing (PSQ), gel-based methylation-specific PCR (MSP), methylation-specific quantitative PCR, methylation-specific quantitative PCR plus specific probe, MethyLight quantitative PCR, methylation-sensitive high-resolution melting, methylation-specific, and multiplex ligation-dependent probe amplification and microarray chips, that is, HM-850K chips. Among these methods, PSQ and MSP appear to be more prognostic for the overall survival of patients with glioma receiving TMZ. However, the best CpG sites and thresholds for these quantitative methods remains ambiguous. The MGMT promoter CGI contains 98 individual CpGs, named CpGs 1–98 depending on their 5′-to 3′-location in the 762 bp sequence of the promoter. 
Likewise, the CpG sites from 76 bp to 80 bp and 84 bp to 87 bp of MSP and those from 72 bp to 95 bp of PSQ have also been explored. Compared with MSP , high heterogeneity has been reported for CpG methylation in PSQ . In spite of this, the number or which CpGs in the MGMT promoter CGI should be selected remains a controversial issue for PSQ testing, with various combinations, such as those of CpGs 72–83, 72–80, 72–77, 74–78, 74–89, 76–79, and 80–83 being used in distinct studies. We have systematically compared the predictive value of all combinations within CpGs 72–82 on the expression of MGMT mRNA through analyzing paired samples using both MGMT methylation PSQ testing and mRNA expression data and revealed that the differences in the predictive value among combinations with four or more CpGs within CpGs 72–82 were marginal. This finding might explain the similar results obtained when different CpGs were examined. The cutoff value is another important issue for PSQ testing of MGMT promoter methylation, especially for cases in which the levels of methylation are in the “gray zone” between a true methylated and unmethylated status. We have successfully developed a novel analytical model to judge the methylation status of cases in the “gray zone.” This novel model evaluates the methylation status of each selected CpG according to its own cutoff value and defines MGMT methylation as occurring when the methylation of at least eight CpGs exceeds the respective threshold. We further demonstrated that this novel model is particularly useful in cases with “gray zone” results according to the traditional testing approach. The only drawback was that the optimal cutoff value for each CpG needed to be adjusted as it was limited by the retrospective nature and the relatively small population size of our study. Taken together, the evaluation of the MGMT promoter methylation status should be performed using validated testing methods, and the results should be properly analyzed for the best patient care. 1p and 19q codeletion Apart from being a diagnostic markers, two phase III clinical trials have revealed that 1p/19q codeletion is also an independent predictive biomarker of benefit from upfront combined RT and chemotherapy with procarbazine, lomustine (CCNU), and vincristine (PCV). However, the mechanism underlying the favorable treatment responses of patients with IDH -mutant and 1p/19q-codeleted gliomas remains poorly understood. A systematic functional investigation of genes located on chromosome 1p and 19q, whose expression levels also have prognostic value for non-1p/19q-codeleted gliomas, might address this question. Importantly, whether codeletion of 1p/19q is also a predictive marker for TMZ treatment remains to be answered in clinical trials, given that PCV has more side effects than TMZ and the relatively long-term survival (median overall survival of >10 years) of patients with oligodendroglioma with IDH -mutant and 1p/19q-codeleted. Of note, only whole-arm 1p/19q codeletion but not partial deletions on either chromosome arm are predictive biomarkers. In addition, the frequency of false-positive FISH 1p/19q codeletion in adult diffuse astrocytic gliomas has been found to be relatively high. Thus, special care should be taken in interpreting positive FISH results, especially for IDH -wildtype gliomas or tumors with IDH -mutant but without TERT promoter mutations. 
Other molecular features associated with chemotherapy Although MGMT promoter methylation is the only commonly acknowledged predictive biomarker for the response of gliomas to TMZ, the discordance between MGMT promoter methylation and the levels of protein expression in a small subset of cases has suggested the existence of other mechanisms contributing to the upregulation of MGMT. Such potential mechanisms include the MGMT promoter super-enhancer and MGMT rearrangement. Likewise, miR-181d has been found to also lead to decreased mRNA stability or reduced protein translation by binding to the 3′ untranslated region of MGMT transcripts and thus can be used to predict the TMZ response of glioblastomas with unmethylated MGMT promoter. In addition to MGMT , DNA mismatch repair (MMR) defects caused by mutations of MMR genes also lead to TMZ resistance in recurrent gliomas. Such MMR defects are more likely to occur in recurrent tumors of astrocytoma with IDH -mutant and MGMT promoter methylation. We have identified a novel DNA methylation-based signature with 31 CpG sites, which predicts the responses of glioblastomas with unmethylated MGMT promoter to TMZ. All of these findings suggested that additional predictive biomarkers should be considered in the precision management of gliomas. Increasing evidence have shown that RNA regulation also plays important roles in the response of gliomas to TMZ. For instance, the increased expression of c-MET or activation of MET signaling pathway contributes to TMZ resistance, especially in secondary glioblastomas. Upregulation of the expression of long noncoding RNA lnc-TALC also enhances the TMZ resistance of glioblastomas via promoting the expression of c-MET through the competitive binding of miR-20b-3p. Circle RNA circASAP1, whose expression is known to be significantly increased in recurrent glioblastoma tissues and TMZ-resistant cells, promotes the TMZ resistance of gliomas via upregulating the expression of NRAS by sponge absorption of miR-502–5p. RNA N6-methyladenosine (m 6 A) has also been shown to play an important role in the TMZ-resistance of gliomas. Interestingly, m 6 A is dynamically regulated by methyltransferases (“writers”), binding proteins (“readers”), and demethylases (“erasers”). Increased levels of m 6 A modifications have been positively associated with glioma malignancy and chemotherapy resistance, and the elevated levels of expression of METTL3, a writer of m 6 A, have also been shown to be required for the malignant progression and TMZ resistance of gliomas. These findings suggested that additional stratification based on transcriptome profiles holds promise for further improving the predictive accuracy of the TMZ response of gliomas. Together, the above findings indicated that a molecular panel consisting of genomic alterations, DNA epigenetic alterations, and RNA profiles has the potential to predict TMZ responses of gliomas with or without MGMT promoter methylation. 6 -methylguanine-DNA methyltransferase (MGMT) promoter methylation Among predictive markers, the presence of MGMT promoter methylation has been associated with benefit from alkylating-agent chemotherapy in patients with glioblastoma, particularly elderly patients (aged ≥65 years). The DNA-alkylating agent temozolomide (TMZ) is the first in a class of drugs used in the postoperative treatment of glioblastoma. MGMT, a DNA repair enzyme, can rapidly repair major temozolomide-induced DNA-adducts, 6-O-methylguanine, via self-alkylation. 
The alkylated MGMT is then degraded through the ubiquitylation pathway. Thus, the levels of expression of MGMT correspond to the repair capacity of cellular 6-O-methylguanine, and deficiency in the expression of MGMT in glioma has been acknowledged as a predictive marker for TMZ sensitivity. The cysteine-phosphate-guanine (CpG) island (CGI) in the 5′ promoter region of MGMT is susceptible to DNA methylation, which suppresses MGMT transcription. The level of expression of MGMT strongly depends on the level of methylation of its promoter region. In particular, MGMT -promoter methylation, occurring in approximately 40% of glioblastoma, has been closely associated with the benefit from TMZ therapy and prolonged survival of patients with glioblastoma. In addition, several clinical trials or studies have also revealed that MGMT -promoter methylation is a highly relevant biomarker for guiding treatment with temozolomide. Taken together, MGMT -promoter methylation can be used as a predictive marker for TMZ sensitivity. However, this finding has not been obtained from studies using a homogeneous cohort of patients with glioblastoma with IDH -wildtype, as glioblastoma cases included both IDH -mutant (<15%) and IDH -wildtype (>85%) prior to WHO CNS2021. Several studies have recently shown that the predictive role of MGMT -promoter methylation for the response to treatment with temozolomide might be restricted to glioblastoma with IDH -wildtype (WHO CNS2016). MGMT promoter methylation is present in most IDH -mutant gliomas and might thus serve as a prognosis but not as a predictive marker. This might be due to the fact that the cutoff value determined in IDH -wildtype glioblastoma cases might not be suitable for IDH -mutant cases. Our recent study using a homogenous cohort of patients with astrocytoma with IDH -mutant grade 4 showed that although MGMT promoter methylation has predictive value in this type of glioma, its cutoff value should be higher than that for glioblastoma with IDH -wildtype. The levels of methylation of the MGMT promoter can also be used for the stratification of the progression-free survival of patients with astrocytoma with IDH -mutant grade 2 or 3 under TMZ therapy by using cutoff values significantly higher than those commonly used in IDH -wildtype glioblastoma cases. Together, these findings strongly suggested that the predictive cutoff value for MGMT promoter methylation in IDH -mutant gliomas must be reassessed. The predictive value of MGMT promoter methylation should also be reevaluated in cases where there have been changes in tumor classification in WHO CNS2021 (e.g., glioblastoma, IDH -wildtype determined by molecular characteristics but without morphological evidence) compared with those in WHO CNS2016. Notably, given that the MGMT gene is located on 10q26, whether loss of chromosome 10 or 10q affects the predictive value of MGMT promoter methylation remains an important issue. MGMT promoter methylation status has been used as a stratification factor for patient selection in clinical trials for gliomas, including glioblastoma with IDH -wildtype and astrocytoma with IDH -mutant. However, the use of MGMT promoter methylation has faced challenges in clinical practices due to the fact that there are no alternative treatment choices for cases with unmethylated MGMT promoter and due to the perceived uncertainty of test results. Alternative treatment choices rely on the success of clinical trials with unmethylated MGMT promoter gliomas. 
The uncertainty of test results is mainly due to the lack of wide availability of standardized tests, given the absence of an ideal testing method and the lack of a defined accurate cutoff. Currently, there are several methods used for MGMT promoter methylation testing, including pyrosequencing (PSQ), gel-based methylation-specific PCR (MSP), methylation-specific quantitative PCR, methylation-specific quantitative PCR plus specific probe, MethyLight quantitative PCR, methylation-sensitive high-resolution melting, methylation-specific, and multiplex ligation-dependent probe amplification and microarray chips, that is, HM-850K chips. Among these methods, PSQ and MSP appear to be more prognostic for the overall survival of patients with glioma receiving TMZ. However, the best CpG sites and thresholds for these quantitative methods remains ambiguous. The MGMT promoter CGI contains 98 individual CpGs, named CpGs 1–98 depending on their 5′-to 3′-location in the 762 bp sequence of the promoter. Likewise, the CpG sites from 76 bp to 80 bp and 84 bp to 87 bp of MSP and those from 72 bp to 95 bp of PSQ have also been explored. Compared with MSP , high heterogeneity has been reported for CpG methylation in PSQ . In spite of this, the number or which CpGs in the MGMT promoter CGI should be selected remains a controversial issue for PSQ testing, with various combinations, such as those of CpGs 72–83, 72–80, 72–77, 74–78, 74–89, 76–79, and 80–83 being used in distinct studies. We have systematically compared the predictive value of all combinations within CpGs 72–82 on the expression of MGMT mRNA through analyzing paired samples using both MGMT methylation PSQ testing and mRNA expression data and revealed that the differences in the predictive value among combinations with four or more CpGs within CpGs 72–82 were marginal. This finding might explain the similar results obtained when different CpGs were examined. The cutoff value is another important issue for PSQ testing of MGMT promoter methylation, especially for cases in which the levels of methylation are in the “gray zone” between a true methylated and unmethylated status. We have successfully developed a novel analytical model to judge the methylation status of cases in the “gray zone.” This novel model evaluates the methylation status of each selected CpG according to its own cutoff value and defines MGMT methylation as occurring when the methylation of at least eight CpGs exceeds the respective threshold. We further demonstrated that this novel model is particularly useful in cases with “gray zone” results according to the traditional testing approach. The only drawback was that the optimal cutoff value for each CpG needed to be adjusted as it was limited by the retrospective nature and the relatively small population size of our study. Taken together, the evaluation of the MGMT promoter methylation status should be performed using validated testing methods, and the results should be properly analyzed for the best patient care. Apart from being a diagnostic markers, two phase III clinical trials have revealed that 1p/19q codeletion is also an independent predictive biomarker of benefit from upfront combined RT and chemotherapy with procarbazine, lomustine (CCNU), and vincristine (PCV). However, the mechanism underlying the favorable treatment responses of patients with IDH -mutant and 1p/19q-codeleted gliomas remains poorly understood. 
A systematic functional investigation of genes located on chromosomes 1p and 19q, whose expression levels also have prognostic value for non-1p/19q-codeleted gliomas, might address this question. Importantly, whether codeletion of 1p/19q is also a predictive marker for TMZ treatment remains to be answered in clinical trials, given that PCV has more side effects than TMZ and given the relatively long-term survival (median overall survival of >10 years) of patients with oligodendroglioma with IDH -mutant and 1p/19q-codeleted. Of note, only whole-arm 1p/19q codeletion, but not partial deletions on either chromosome arm, is a predictive biomarker. In addition, the frequency of false-positive FISH 1p/19q codeletion in adult diffuse astrocytic gliomas has been found to be relatively high. Thus, special care should be taken in interpreting positive FISH results, especially for IDH -wildtype gliomas or tumors with IDH -mutant but without TERT promoter mutations. Although MGMT promoter methylation is the only commonly acknowledged predictive biomarker for the response of gliomas to TMZ, the discordance between MGMT promoter methylation and the levels of protein expression in a small subset of cases has suggested the existence of other mechanisms contributing to the upregulation of MGMT. Such potential mechanisms include the MGMT promoter super-enhancer and MGMT rearrangement. Likewise, miR-181d has been found to decrease mRNA stability or reduce protein translation by binding to the 3′ untranslated region of MGMT transcripts and can thus be used to predict the TMZ response of glioblastomas with unmethylated MGMT promoter. In addition to MGMT , DNA mismatch repair (MMR) defects caused by mutations of MMR genes also lead to TMZ resistance in recurrent gliomas. Such MMR defects are more likely to occur in recurrent tumors of astrocytoma with IDH -mutant and MGMT promoter methylation. We have identified a novel DNA methylation-based signature with 31 CpG sites, which predicts the responses of glioblastomas with unmethylated MGMT promoter to TMZ. All of these findings suggested that additional predictive biomarkers should be considered in the precision management of gliomas. Increasing evidence has shown that RNA regulation also plays important roles in the response of gliomas to TMZ. For instance, the increased expression of c-MET or activation of the MET signaling pathway contributes to TMZ resistance, especially in secondary glioblastomas. Upregulation of the expression of the long noncoding RNA lnc-TALC also enhances the TMZ resistance of glioblastomas by promoting the expression of c-MET through the competitive binding of miR-20b-3p. Circular RNA circASAP1, whose expression is known to be significantly increased in recurrent glioblastoma tissues and TMZ-resistant cells, promotes the TMZ resistance of gliomas by upregulating the expression of NRAS through sponging of miR-502–5p. RNA N6-methyladenosine (m6A) has also been shown to play an important role in the TMZ resistance of gliomas. Interestingly, m6A is dynamically regulated by methyltransferases (“writers”), binding proteins (“readers”), and demethylases (“erasers”). Increased levels of m6A modifications have been positively associated with glioma malignancy and chemotherapy resistance, and elevated levels of expression of METTL3, a writer of m6A, have also been shown to be required for the malignant progression and TMZ resistance of gliomas.
These findings suggested that additional stratification based on transcriptome profiles holds promise for further improving the predictive accuracy of the TMZ response of gliomas. Together, the above findings indicated that a molecular panel consisting of genomic alterations, DNA epigenetic alterations, and RNA profiles has the potential to predict TMZ responses of gliomas with or without MGMT promoter methylation. With the increasing understanding of molecular features, the targeted therapy of gliomas has become a reality. In particular, IDH mutation, the most prominent genetic feature of adult gliomas, is known to affect cell death, epigenetic status, and metabolism of tumors via the synthesis of 2-hydroxyglutarate. Blocking this impact through the use of IDH1/IDH2 inhibitors has been shown to be promising in preclinical models. In a phase I study, ivosidenib (AG-120), a small-molecule inhibitor of IDH1, was shown to prolong disease control and reduce the growth of advanced gliomas with IDH mutations. Several other inhibitors are also currently under evaluation, and further clinical trials are expected to provide pivotal insights about the efficacy and toxicity of these compounds in patients. Regarding IDH -wildtype adult gliomas, whole exon sequencing of large samples revealed that the most common mutated oncogenic pathway of adult IDH -wildtype gliomas included receptor tyrosine kinase (RTK)–PI3K, TP53, and RB pathways. Both RTK inhibitors targeting EGFR and RTK–PI3K pathway inhibitors have been studied in clinical trials, however, without encouraging results. This might be associated with the high intratumoral heterogeneity and evolution of gliomas. Thus, multitargeted therapeutic approaches have greater potential to improve the survival of patients with gliomas. Regorafenib, a VEGF receptor 2 and multikinase inhibitor, has been found to increase the survival of patients with recurrent glioblastoma compared with CCNU in a randomized phase II trial. Notably, WHO CNS2021 also explicitly recommends the evaluation of fusion genes in adult gliomas, including FGFR3-TACC3, MET , EGFR , and NTRK fusions. These fusion genes are important therapeutic targets for gliomas. Interestingly, FGFR–TACC fusions occur in 3.5% of pediatric gliomas and approximately 2.9% of glioblastomas with IDH -wildtype and have been shown to commonly cooccur with CDK4 amplification. FGFR3-TACC3 –positive patients benefited from treatment with an FGFR inhibitor in a clinical study with a small sample size. MET fusions, including TFG–MET , CLIP2–MET , and PTPRZ1–MET , have become diagnostic molecular markers for newly defined infant-type gliomas. We have demonstrated that PTPRZ1–MET and MET exon 14 skipping exists in about 15% of adult secondary glioblastomas. In a phase I clinical trial, a novel small-molecule MET inhibitor, PLB1001, successfully suppressed the growth of tumor harboring a PTPRZ1–MET fusion. In addition, EGFR–SEPT14 (3.7%) and EGFR–PSPH (1.9%) fusions have also been reported in glioblastomas, with EGFR–SEPT14 activating the STAT3 signaling to confer sensitivity to EGFR inhibition in a preclinical study. These findings offer new hope for the treatment of gliomas. However, the mutational evolution of gliomas under therapy cannot be ignored, especially when subclone expansion is influenced by strong selection pressures and is accompanied by adaptation in response to treatment modalities. 
Although targeted therapies do not widely improve survival in patients with gliomas, a multimodal treatment approach based on the dynamic changes in molecular characteristics might improve survival outcomes and the quality of life in patients with gliomas. Currently, immunotherapy of glioma remains a profound challenge. Although it has been acknowledged that the CNS is not immune privileged, the unique immune microenvironment of gliomas resembles a “cold tumor” phenotype owing to the blood–brain barrier. The “cold tumor” phenotype of gliomas has been associated with poor responses to immune stimulatory therapies, such as immune checkpoint blockade. Additionally, the relatively few coding mutations and high intratumor heterogeneity also limit the development of immunotherapies for gliomas. Nevertheless, the rapid development of molecular pathology has advanced our understanding of the genetic and immunological features of gliomas, thus offering adequate opportunities for the implementation of immunotherapy as a treatment option for gliomas. For instance, the IDH mutation, a genetic driver of about half of adult gliomas, is known to suppress leukocyte chemotaxis via reducing the expression of cytotoxic T lymphocyte-associated genes and interferon-γ (IFNγ)-inducible chemokines, including CXC-chemokine ligand 10 (CXCL10). A study at the single-cell level also demonstrated that lymphocytes, including T cells and NK cells, were enriched in IDH -wildtype gliomas. All of these findings indicated the distinct immune microenvironment of gliomas with different genetic characteristics, suggesting that subsequent immunotherapy approaches should be developed on the basis of the latest pathological classification of gliomas. Although clinical trials of immune checkpoint blockade targeting the PD1–PD1 ligand 1 (PDL1) axis failed to improve survival in all enrolled patients, a subsequent study showed that a subgroup of patients with specific molecular features might benefit from the PD1/PDL1 blockade. This controversy suggested the requirement for future studies aiming to identify molecular markers for immunotherapy. In particular, adoptive T-cell therapy holds considerable promise for the treatment of gliomas. Proper target selection is a prerequisite for the success of this therapeutic approach, which requires a full understanding of the molecular characteristics of gliomas, especially the specific markers expressed on the cell plasma membrane. A recent clinical trial of CAR T-cells targeting EGFRvIII, HER2, and IL-13Rα2 failed to achieve benefit in gliomas, and this was attributed to their high intratumoral heterogeneity. Single-targeted CAR T-cells are known to kill only a portion of tumor cells expressing the target molecule, accompanied by the expansion of tumor cells without target expression. Therapeutic vaccination for gliomas is another promising potential therapeutic modality but has not been clinically verified. The vaccination approach is also strongly associated with specific molecular alterations of gliomas, such as EGFRvIII and IDH1 -R132H. A substantial problem with single-peptide vaccination is the immune escape caused by intratumoral heterogeneity and selective pressure that results in antigen loss and glioma recurrence. Thus, prediction and dynamic monitoring of molecular features during treatment has been a major issue for the immunotherapy of gliomas. However, several clinical and ethical barriers to the acquisition of longitudinal biopsy glioma samples still exist.
MRI-based monitoring approaches also face difficulties, including pseudoprogression, radiation-mediated necrosis, and the limited ability to reflect changes in molecular characteristics during treatment. The emergence of fluid biopsy sequencing based on cerebrospinal fluid has provided an alternative, though it is still in its early stages. Cerebrospinal fluid liquid biopsy has the potential to improve the diagnosis, clinical care, and decision-making for gliomas. Altogether, current findings have pointed to the need for the continued development of predictive biomarkers and dynamic monitoring methods for immune-based therapies for gliomas. After nearly 20 years of research, gliomas remain universally lethal. However, the rapid development of molecular pathology has enabled the more accurate classification of gliomas. This is expected to gradually impact glioma surgery approaches, RT and chemotherapy regimens, and the development of targeted therapy and immunotherapy of gliomas [Table ]. Based on the molecular or biologically defined classification of gliomas, the design of clinical review studies and prospective clinical trials of gliomas will be more accurate. However, advances in molecular pathology have also brought challenges to clinical molecular testing, pathological diagnosis, and the clinical practice of glioma management. Several challenges continue to exist in clinical practice, including accurate diagnosis and the balance among testing cost, testing accuracy, and timely diagnosis. Importantly, the selection of different therapeutic approaches for different pathological types of gliomas will become a reality in the clinical research of gliomas in the future. Of note, both the molecular detection and the treatment of gliomas have been greatly complicated by the high intratumoral heterogeneity of gliomas, the molecular evolution of tumors under treatment, the switching of subclones, wide transitions in the transcriptional states of tumors, and dynamic alterations of immune infiltration. Tracing the changes in molecular characteristics, immune status, and transcriptional alterations of gliomas during treatment has the potential to improve the long-term prognosis of gliomas. The emergence of liquid biopsy testing, particularly that based on cerebrospinal fluid, has shed new light on the dynamic monitoring of glioma molecular features. There have been no clear indications of racial differences regarding the development and type of gliomas. However, it has been shown that glioma risk is associated with the extent of estimated European genetic ancestry in African-Americans and Hispanics. Currently, the classification system of gliomas is mainly based on the molecular features and multiple omics data from Caucasian populations. Thus, whether this molecular classification is suitable for East Asian populations remains to be addressed. The establishment of an optimized molecular diagnosis and treatment system for the East Asian population should be an important topic for glioma research in China. This study was supported by grants from the Beijing Nova Program (No. Z201100006820118); the National Natural Science Foundation of China (Nos. 82192894 and 81903078); Research Unit of Accurate Diagnosis, Treatment, and Translational Medicine of Brain Tumors Chinese (No. 2019-I2M-5-021). None
|
Knowledge level of cardio-oncology in oncologist and cardiologist: a survey in China
|
006b7e54-fce4-4f1c-a303-318f78058812
|
10106198
|
Internal Medicine[mh]
|
This study was approved by the Institutional Review Boards of Cancer Hospital, Chinese Academy of Medical Sciences (No. NCT03537339), and the protocol has been registered on ClinicalTrials.gov with the number NCC201712029.
The authors would like to thank Sandra Wang, China iCardioOncology Network (CiON), for help in the design of this study and the distribution of the questionnaire. We thank the doctors and scholars from all over China who helped to distribute and collect the questionnaire and all the participants who completed it.
None.
Supplemental Digital Content
|
MG7-Ag, hTERT, and TFF2 identified high-risk intestinal metaplasia and constituted a prediction model for gastric cancer
|
b9334577-63ea-4fd3-b2d1-022223b2a88b
|
10106211
|
Anatomy[mh]
|
This work was supported by the Shaanxi Foundation for Innovation Team of Science and Technology (No. 2018TD-003) and the Project from State Key Laboratory of Cancer Biology (No. CBSKL2019ZZ07).
None.
Supplemental Digital Content
|
New Year's Eve in otorhinolaryngology: a 16-year retrospective evaluation
|
86297a58-edc8-4036-bf29-7b0b41e43980
|
10106316
|
Otolaryngology[mh]
|
For many decades, fireworks of category F2 and other pyrotechnic objects like firecrackers have traditionally been set off by private individuals on New Year's Eve in Germany. Category F2 includes small fireworks such as bangers and firecrackers that pose a comparatively low hazard, have a comparatively low noise level, and are intended for use in confined outdoor areas. Fireworks articles of category F2 may have a maximum noise level of 120 dB at a distance of 8 m. Sound pressure peaks above 150 dB can lead to blast or explosion trauma, depending on the duration of exposure. If the sound pressure peaks are present for at least 2 ms, explosion trauma can occur. Consequences may include rupture of the tympanic membrane, possibly with dislocation and/or fracture of the auditory ossicles and hemorrhage into the tympanic mucosa. Blast and explosion trauma also result in hearing loss of varying severity (an illustrative calculation relating these noise levels to sound pressures is given at the end of this section). For the turns of the year 2020/2021 and 2021/2022, a nationwide ban on the sale of category F2 fireworks was declared in Germany. The rationale was that, during the COVID-19 pandemic, hospitals should not be additionally burdened with emergencies caused by fireworks injuries. Some federal states and cities established further individual rules for handling pyrotechnics: in the states of Baden-Württemberg and Bavaria, for example, fireworks and firecrackers were only permitted on private property due to the curfew at night. An analysis by a health insurance company showed that during the COVID-19 pandemic, the number of hospital admissions due to New Year's Eve accidents nationwide dropped from about 6,200 cases in 2019 to about 3,800 cases the following year. In addition to the fireworks ban, the turns of the year during the COVID-19 pandemic were influenced by various other lockdown measures. These included contact restrictions at private gatherings, the closure of nightclubs, the ban on fireworks in central locations, and the curfew. Pyrotechnics can cause serious injuries, especially in the context of non-professional private use. Published reports and studies mainly concern injuries of the eyes and hands [ – ]. A study from the Netherlands reports that burns of the upper extremities and eye injuries are the most common injuries caused by pyrotechnics on New Year's Eve, especially among adults. Studies of ear, nose and throat (ENT) injuries from pyrotechnics on New Year's Eve are scarce in the literature to date. The aim of our monocentric retrospective study was to evaluate all emergency patients presenting to the ENT department of a university hospital in Southern Germany on New Year's Eve. The intention was not only to analyze ENT injuries caused by fireworks on New Year's Eve, but also to assess the controversially discussed influence of the above-mentioned COVID-19 restrictions on emergency presentations in the ENT department.
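To put the noise levels cited above into perspective, the short sketch below (Python) converts a sound pressure level in dB SPL into a pressure amplitude and applies the idealized free-field point-source approximation (a decrease of about 6 dB per doubling of distance). Real impulsive fireworks noise does not follow this simple model exactly, so the numbers are illustrative only.

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (20 µPa)

def spl_to_pressure(level_db: float) -> float:
    """Convert a sound pressure level in dB SPL to pressure in Pa."""
    return P_REF * 10 ** (level_db / 20)

def spl_at_distance(level_db: float, ref_dist_m: float, dist_m: float) -> float:
    """Idealized free-field point source: the level falls ~6 dB per
    doubling of distance (1/r law); real fireworks impulses deviate."""
    return level_db - 20 * math.log10(dist_m / ref_dist_m)

# 120 dB SPL (category F2 limit measured at 8 m) corresponds to ~20 Pa.
print(round(spl_to_pressure(120), 1))          # 20.0 Pa
# Under this idealized model the same source at 1 m would reach ~138 dB,
# louder, but still below the ~150 dB peak level cited above for blast trauma.
print(round(spl_at_distance(120, 8, 1), 1))    # 138.1 dB
```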
A retrospective analysis of all patients who presented as emergencies at an ENT-University hospital department in Southern Germany was performed over a period of 16 turns of the year (2006–2022). All patients (no restriction regarding age and gender) who arrived between 4:00 pm on December 31st and 3:59 pm on January 1st were included. This time period was chosen by the authors to cover, as far as possible, the actual effects of New Year's Eve. Therefore, a 24-h period was chosen, starting in the afternoon. Outpatients were evaluated as well as patients who were admitted as inpatients after the emergency presentation. The presentations were divided into New Year's Eve-associated and New Year's Eve-unrelated. New Year's Eve-associated presentations were further categorized into the following three causes: Fireworks, violent altercation, and other New Year's Eve-associated trauma. The basis for the categorization was the medical documentation of the incident. Physical violence that occurred for example during New Year's Eve celebrations or while drinking alcohol was categorized as New Year's Eve associated. The clinical courses of all patients were further analyzed with respect to the need for imaging and/or surgical intervention. Other patient-specific characteristics such as age and gender were considered. Patients who had reached the age of 18 years were assigned as of legal age. In addition, a comparison was made between the 14 turns of the year with "usual" handling of pyrotechnics, as well as the two turns of the year under the ban on the transfer of fireworks of category F2 and the upper mentioned rules during the pandemic. The basis for the retrospective evaluation were the medical documentation of the electronic patient records of the department. The analysis of the data was mainly performed descriptively.
Patient characteristics and numbers of emergency presentations Of a total of 343 emergency presentations between 4:00 pm on December 31st and 3:59 pm on January 1st from 2006 to 2022, 69 of the patients presented for New Year's Eve-associated reasons (20%). 50 of the 69 patients were male (72%) and 19 were female (28%). The average age of the patients was 31.6 years (range 3–76 years). 11 of the 69 patients (15.9%; 9 male, 2 female) were underage. 51 of the 69 patients (74%) presented because of fireworks injuries and 13 patients presented because of violent altercations (19%). 5 patients (7%) presented due to other New Year's Eve-associated causes; these are listed in Table . In terms of absolute numbers, the 2009–2010 turn of the year was the one with the most New Year's Eve-associated presentations ( n = 10), and the 2020–2021 turn of the year, influenced by the COVID-19 pandemic, was the one with the fewest New Year's Eve-associated emergency presentations ( n = 0). At the turns of 2014–2015 and 2018–2019, the total number of emergency presentations was highest (n = 29), and from 2021 to 2022 (again under the influence of the COVID-19 pandemic) was the lowest total number of emergency patients ( n = 4). Table lists all absolute and relative numbers of emergency patients. Diagnosis and therapy A further analysis was performed with regard to the injuries sustained (Fig. ). Noise trauma was present in 49 of the 69 New Year's Eve-associated emergency patients (71%). A burn was present in one patient (1.5%). Tympanic membrane perforation was diagnosed in 7 of the 69 patients (10.1%). All diagnoses are listed in Table , where multiple diagnoses could be made in one patient. Surgical care had to be provided in 10 of the 69 New Year's Eve-associated emergency patients (14.5%). Surgical care was defined as any surgical intervention, both under local and general anesthesia. Interventions under local anesthesia included for example sutures and tympanic membrane splinting. An overview of the interventions is shown in Fig. . Inpatient admission was required in 4 of the 69 New Year's Eve-associated emergency patients (5.8%). Imaging was performed for New Year's Eve-associated injury in 7 of the 69 patients (10.1%), in all cases, CT scan was the method of choice. Influence of the COVID-19 pandemic A total of 68 patients (21.2%) presented for New Year's Eve-associated injuries at the 14 turns of the year 2006/2007 to 2019/2020 which were not yet impacted by regulations during the COVID-19 pandemic. The total number of emergency presentations during this period was 321. On average, this equates to 5 patients due to New Year's Eve-associated injuries annually for an average total of 23 emergency presentations between 4:00 pm on December 31st and 3:59 pm on January 1st. Both turns of the years 2020/2021 and 2021/2022 were during the COVID-19 pandemic, and thus affected by the measures mentioned in the introduction. There were no New Year's Eve-associated emergency presentations at the turn of 2020/2021 for a total of 18 patients presenting as emergencies during the period studied. The turn of the year 2021/2022 was characterized by the lowest of the total number of patients ( n = 4), with only one New Year's Eve-associated emergency presentation, due to noise trauma caused by a scare gun. 
Overall, there was one New Year's Eve-associated emergency presentation of 22 patients at the turns of the year during the COVID-19 pandemic during the period studied (4.5%) which averages to 0.5 New Year's Eve-associated emergency presentations per turn of the year for an annual average of 11 total emergency patients. Table summarizes the comparison.
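The averages reported above follow directly from the raw counts; a minimal sketch of the calculation is shown below, using only the counts given in the Results.

```python
# Descriptive comparison of New Year's Eve-associated (NYE) and total
# emergency presentations per turn of the year, before and during the pandemic.
pre_pandemic = {"nye_associated": 68, "total": 321, "turns": 14}
pandemic     = {"nye_associated": 1,  "total": 22,  "turns": 2}

for label, d in [("2006/07-2019/20", pre_pandemic), ("2020/21-2021/22", pandemic)]:
    per_year_nye = d["nye_associated"] / d["turns"]
    per_year_all = d["total"] / d["turns"]
    print(f"{label}: {per_year_nye:.1f} NYE-associated, "
          f"{per_year_all:.1f} total presentations per turn of the year")
# -> roughly 4.9 vs 0.5 NYE-associated and 22.9 vs 11.0 total per year
```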
Previous data on studies of New Year's Eve-associated injuries in ENT are scarce. There are a few publications on pyrotechnic injuries from general emergency departments and especially with a focus on Ophthalmology and Hand Surgery. Nevertheless, the present data show that otolaryngology is clearly affected by the effects of New Year's Eve behavior, particularly through acoustic trauma and midface fractures. In our study of a Southern German University ENT emergency Department, 50 of the 69 patients with New Year's Eve-associated injuries leading to presentation to the emergency department were male. This percentage of 72% is consistent with data from New Year's Eve analyses from other disciplines, which also report a dominant proportion of male patients (Department of Ophthalmology: 71.2%, systematic review from Ophthalmology: 75%—range: 66–95% ). The percentage of male patients in a study of hand injuries from fireworks was even higher at 96.6% . Similarly, the percentage of male patients in a retrospective evaluation of severe fireworks injuries that resulted in hospital admission or surgery was over 90% . The number of underaged patients in the present study at 15.9% was lower compared to studies from ophthalmology (Lenglinger 2021: 34.9% ). The most common cause of ENT injury was inner ear trauma from fireworks. The patterns of injury from pyrotechnics were wide-ranging, from tympanic membrane perforations to noise trauma. Burns were rare in the present study, with only one affected patient. In comparison, burns were the most common injury from fireworks at 48% ( n = 38) in a prospective multicenter study from the Netherlands which recruited from a specialized burn center in addition to general emergency departments . It should also be noted, in comparison to other studies, that the present evaluation did not include only fireworks-induced injuries. The rate of patients requiring surgical treatment (14.5%) was nevertheless comparable to other studies (van Yperen et al.: 20% ). One limitation of our study is that sound audiometry is not regularly recorded in emergency patients. Thus, only the presence of an acute noise trauma with mostly subjective hearing loss and in most of the cases the results of tuning fork tests (of Weber and Rinne) was documented. In most of the cases no statement could be made about the severity of the symptoms and the further clinical course. This is partly due to the fact that most patients returned to their local ENT specialist for further follow-up, which was not possible on December 31st and January 1st due to opening hours. Another point of discussion can be the analysis of the patients after alcohol consumption or physical confrontation in this context. It is obvious that this is not a circumstance that can be attributed to New Year’s Eve alone, but also occurs on “normal” weekends. Nevertheless, an increased alcohol consumption is certainly present on this night, so that an association with the festivities cannot be dismissed out of hand. One important topic to discuss in a monocentric analysis is the influence and availability of nearby emergency departments and ENT specialists. During the inclusion criteria time (between 4:00 pm on December 31st and 3:59 pm on January 1st), ENT emergency departments and/or ENT specialists are rarely available in a surrounding area of about 100 km radius except one other ENT department in the same town. 
This might explain the relatively small number of violence-induced injuries compared to symptoms or injuries of the inner ear, as the former, such as soft tissue injuries, are also treated in some general emergency departments without ENT specialists. The treatment spectrum and availability of other involved disciplines such as Pediatrics or Oral and Maxillofacial Surgery also have a relevant influence, and the local structures must be considered in this monocentric evaluation: the Pediatric Department is located next door, and children with injuries of the head and neck are usually sent to and treated in the ENT department. The Department of Oral and Maxillofacial Surgery with its emergency unit is located in the same town. Nasal bone fractures and fractures of the midface without involvement of the mandible or occlusion are frequently treated in the ENT department. In addition to the last-mentioned discussion points, it must be considered that an unknown number of patients with New Year's Eve-associated ORL injuries in the catchment area of the authors' university hospital remains unrecognized in this study, as not all patients consult the emergency department on the same day and are, therefore, not described in this analysis. Strengths of the present study include the first evaluation of an ENT-specific emergency care setting on New Year's Eve and, in addition, an examination of the impact of restrictions under pandemic conditions. Since 14 turns of the year were evaluated before the influence of the COVID-19 pandemic and only two turns of the year were analyzed under the influence of the pandemic, a statistically meaningful comparison is not possible. Nevertheless, the descriptive analysis shows that, among other things, under the ban on the transfer of fireworks at the turns of the year 2020/2021 and 2021/2022, no emergency presentation was caused by an ENT injury from fireworks. Both the average number of New Year's Eve-associated emergency patients per year and the average total number of patients during the period studied were reduced by more than half compared with previous years. A retrospective analysis from 2004 of serious eye and adnexal injuries from fireworks in Northern Ireland before and after the lifting of the fireworks ban also confirmed that the removal of the legislative ban on fireworks in 1996 had a significant effect on the incidence of eye injuries. All these results should support the responsible use of fireworks even after the special restrictions of the COVID-19 pandemic have ended. This applies not only to the persons actively lighting the fireworks, but especially to bystanders and primarily minors. Evaluations from ophthalmology show that up to 60% of those injured were bystanders rather than the active igniters of the fireworks. Sandvall et al. found that firework shells and mortars disproportionately cause permanent impairment from eye and hand injury. Concerning category F2 fireworks in general, a noise level of 120 dB at a distance of 8 m may not only cause the aforementioned acute injuries; it can also be assumed that some of the patients evaluated here retained long-term consequences, including inner ear damage with hearing loss and tinnitus or permanent perforation of the eardrum.
The results show that New Year's Eve-associated ear, nose, and throat injuries have a wide spectrum from inner ear trauma to midface fractures. Long-term damage may include hearing loss and tinnitus. This study shall support the responsible use of fireworks even after the end of the special regulations of the COVID-19 pandemic.
|
Analysis of Three-Dimensional Bone Microarchitecture of the Axis Exposes Pronounced Regional Heterogeneity Associated with Clinical Fracture Patterns
|
c1cbd680-a051-4c2c-8220-a2beb1c0ba1b
|
10106346
|
Forensic Medicine[mh]
|
The second cervical vertebra, called the axis, contains a prominent odontoid process, the dens axis (DAX). The DAX is prone to fracture especially in elderly patients with compromised bone mineral density. In clinical practice, the Anderson and D’Alonzo fracture classification has become the most commonly used system for axis fractures . In short, type I fractures of the axis (DFTI) occur in the region of the tip of the DAX. Type II fractures (DFTII) extend through the base of the DAX, i.e. , the transition from DAX to the corpus axis (CAX). Fractures affecting the CAX are classified as type III fractures (DFTIII). The most common fracture type is the DFTII [ – ]. Especially in elderly patients, DFTII fractures frequently occur in the context of low-energy trauma such as falls from standing height . Due to the demographic change with rapidly increasing numbers of active persons of high age, axis fractures (often also termed odontoid fractures) play an increasingly important role in clinical practice and are associated with a high morbidity and mortality rate [ – ]. Although there is a consensus in the literature that axis fractures must be treated quickly to reduce morbidity and mortality and prevent further complications, there is little knowledge on whether conservative or surgical therapy should be preferred [ , – ]. Overall, surgical therapy appears to achieve higher rates of fracture healing for patients > 65 years of age . A commonly applied surgical concept is the anterior screw fixation according to Böhler et al. . However, particularly in patients with limited bone mineral density it is not uncommon for the inserted screws to loosen [ – ] resulting in persistent instability, causing pseudarthrosis with associated pain and risk of myelopathy with neurologic dysfunction . Typically, loosening of the screw takes place at the base of the DAX and the CAX. In this regard, the bone quality and stability resulting from the bone microarchitecture seem to be of paramount importance. In the past, few studies have been conducted to investigate the bone microarchitecture of the axis and to derive implications for the occurrence and treatment of fractures. In 1994, Amling et al. investigated the microarchitecture of the axis using histological sagittal sections from 22 cadaveric specimens. By determining the bone volume per tissue volume (BV/TV), the trabecular bone pattern factor, and the cortical thickness, a region of least resistance at the base of the DAX was identified in this study . In 1995, the same group also concluded, that there is an increased risk of fractures and subsequent non-unions in osteoporotic bone, based on a reduced trabecular bone mass and a reduced trabecular interconnection at the base of the DAX . However, these studies were based on the sole evaluation of the microarchitecture of the axis based on sagittal two-dimensional sections. Further three-dimensional analyses focused on the occurrence of the residual subdental synchondrosis [ – ]. Although other studies have also been performed to investigate the trabecular bone microarchitecture of the axis in three dimensions , a structured analysis in a representative study group has been lacking. 
Therefore, the current study aims to analyze the bone microarchitecture of the axis using high-resolution peripheral quantitative computed tomography (HR-pQCT), to find correlations in clinical specimens analyzed via classical computed tomography (CT), and to derive implications for the occurrence and treatment of fractures of the DAX.
Computed Tomography (CT) For initial clinical reference, CT scans from n = 20 (10 male and 10 female) individuals without fractures of the axis were retrospectively evaluated for apparent density of the axis expressed in Hounsfield units (HU). Only patients without fracture, tumors, or previous surgery in the upper cervical spine were included for analysis. In addition, patients with axis fractures who underwent surgery at our institution between 01/01/2014 and 12/31/2020 were retrospectively analyzed for fracture type. Seventy-eight patients could be identified. In 29 patients, preoperative CT scans of the cervical spine were available. Three patients were excluded due to the presence of spinal osseous metastases or cystic erosion of the DAX. Thus, the apparent density of n = 26 (12 male and 14 female) fractured axes was evaluated in preoperative CT scans. In all analyses, a dual-source SOMATOM Force system (Siemens Healthineers, Munich, Germany) or predecessor model was used. Sectioning was performed in all CT scans at a spatial resolution of 1 mm axial thickness and calibration was performed as part of daily routine clinical practice. Multiplanar reconstruction was performed in axial slices. Measurements were performed utilizing the software Centricity Universal Viewer (v6.0, GE Healthcare, Chicago, USA). To evaluate the apparent density corresponding to clinical fracture sites, three zones (zone I-III) were defined according to the fracture classification of Anderson and D’Alonzo (Fig. ). In detail, zone I was defined as the craniocaudal extent from the most cranial point of the DAX to the inferior border of the anterior arch of the atlas. Zone II extends caudally from this boundary to the transition of the DAX into the CAX. Zone III extends caudally from this boundary to the base plate of the axis. Measurements were performed in sagittal and axial planes. For this purpose, an elliptical region of interest with a diameter as large as possible within the cancellous bone was analyzed in each zone and plane, excluding the cortical bone. Collection of Specimens During Autopsy Human axis specimens ( n = 28; 14 male and 14 female) were obtained during autopsies and fresh-frozen until further analysis after harvesting. Approval to collect the samples was obtained from the relatives of the deceased. Age, sex, and weight of the individuals were documented. Only anatomically normal and unaltered axis specimens were included, while exclusion criteria included a history of fracture or surgery in the cervical spine or tumor disease. All specimens were anonymized before further analysis. This study was approved by the local ethics committee (2021-300002-WF). High Resolution Quantitative Computed Tomography (HR-pQCT) The fresh-frozen axes were subsequently scanned and analyzed using second-generation HR-pQCT (XTremeCT II, Scanco Medical, Brüttisellen, Switzerland). All samples were scanned using a tube voltage of 68 kV, a current of 1.47 mA and with a spatial resolution of 62 µm. Subsequent analysis of the reconstructed data was performed using the integrated software by Scanco Medical (v6.6). In more detail, the analysis of cortical and trabecular bone was performed for the whole analyzed volume (CC), as well as separately for zones I (tip), II (base), and III (CAX), following the classification according to Anderson and D’Alonzo. The craniocaudal extent of the cylinders corresponded to the above-mentioned limits of the respective zone (I–III). 
The trabecular bone microarchitecture of the axis was analyzed in a cylindrical volume of interest between the apex of the DAX and the bottom plate of the CAX resulting from the extension of the DAX caudally, based on fracture morphology and the normal screw position of a Böhler screw. Since the DAX has different diameters over its course, the maximum diameter of the DAX was determined at its base. To prevent the inclusion of cortical bone in the spongious cylinders, a diameter of 35% of the maximum diameter of the DAX was defined for the cylinder in zone I, and a diameter of 90% of the maximum diameter of the DAX was defined for the cylinders in zones II and III. The following parameters were determined: BV/TV, number of trabeculae (Tb.N, 1/mm), trabecular thickness (Tb.Th, mm), trabecular separation (Tb.Sp, mm), connectivity density (Conn.D, 1/mm 3 ), structure model index (SMI), degree of anisotropy (DA), volumetric (apparent) bone mineral density (vBMD, mg HA/cm 3 ), and tissue mineral density (TMD, mg HA/cm 3 ), which is defined as the mineral density of mineralized tissue. The latter two were evaluated in trabecular and cortical bone, separately. Further, cortical thickness (Ct.Th, mm) and porosity (Ct.Po, 1) were determined. Statistical Analysis Statistical analysis was performed using SPSS 27.0 (IBM Corp., Armonk, NY, USA). All data is presented as mean ± standard deviation. The Shapiro–Wilk test was used to test for normal distribution. To test for differences between zones Repeated Measures ANOVA (RM ANOVA) with a Geisser-Greenhouse correction in the absence of sphericity and subsequent Tukey post-hoc testing was used. To test for differences between the non-fracture and fracture group, the unpaired two-sided t-test was used. The influence of age and sex on the microstructural evaluation was analyzed with RM ANCOVA testing. The significance level was set at p < 0.05.
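For readers who wish to reproduce comparisons of this kind outside SPSS, a possible open-source sketch of the described workflow is given below (normality testing, repeated-measures ANOVA across zones, and an unpaired two-sided t-test between groups). The column names, file names, and data layout are assumptions for illustration, and the Greenhouse–Geisser sphericity correction applied in the study is not performed by AnovaRM and would require an additional step.

```python
# Sketch of the described statistical workflow using open-source tools.
# Assumed data layout: one row per specimen and zone, with columns
# 'specimen', 'zone' (I/II/III) and 'bv_tv'; all names are illustrative.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("axis_microarchitecture.csv")   # hypothetical file

# 1) Shapiro-Wilk normality test per zone
for zone, values in df.groupby("zone")["bv_tv"]:
    print(zone, stats.shapiro(values).pvalue)

# 2) Repeated-measures ANOVA with 'zone' as within-subject factor.
#    Note: the Greenhouse-Geisser correction used in the study is not
#    applied by AnovaRM and would need a separate sphericity step.
rm = AnovaRM(df, depvar="bv_tv", subject="specimen", within=["zone"]).fit()
print(rm)

# 3) Post-hoc pairwise comparison between zones (Tukey HSD approximation)
print(pairwise_tukeyhsd(df["bv_tv"], df["zone"]))

# 4) Unpaired two-sided t-test, e.g. comparing apparent density between
#    the non-fracture and fracture CT groups (column names assumed)
ct = pd.read_csv("clinical_ct_density.csv")      # hypothetical file
fx, nofx = ct[ct.group == "fracture"], ct[ct.group == "non_fracture"]
print(stats.ttest_ind(fx["hu_zone2"], nofx["hu_zone2"]))
```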
Clinical Fracture Patterns and Apparent Density Assessed by Classical Computed Tomography (CT) Analysis of fracture types according to the Anderson and D’Alonzo classification in 78 patients treated surgically at our institution showed that 90% and 10% were DFTII fractures and DFTIII fractures, respectively (Fig. a). Notably, DFTII fractures also frequently exhibit osseous complications in terms of nonunion ( i.e. , pseudarthrosis) (Fig. b). In patients without axis fractures (mean age 81.6 ± 9.2 years), apparent density in zones II and III was lower compared to zone I (zone I vs . II p < 0.001; zone I vs. III p < 0.001; zone II vs. III p = 0.995) (Table ). In patients with axis fractures (mean age 81.9 ± 9.9 years), a similar pattern with higher density in zone I compared to zones II and III was detected (zone I vs. II p < 0.001; zone I vs. III p < 0.001; zone II vs. III p = 0.065). In addition, the sagittal measurement of zone II in the fracture collective showed a higher density than the sagittal measurement in zone III (zone II vs . III p = 0.031). No differences were detected between non-fractured and fractured individuals, neither within the whole dens nor each individual zone. Region-Specific Differences in Bone Microarchitecture via HR-pQCT A total of n = 28 (14 male and 14 female) human axis specimens was analyzed. The mean age at death was 80.8 ± 13.9 years. Compared with clinical cases, no significant difference could be identified with respect to age ( p = 0.738). Results of trabecular and cortical parameters obtained via HR-pQCT are presented in Table . Determinants of the cortical microarchitecture and mineralization are additionally presented in Fig. . Zone III presented with lower cortical TMD compared to zones I ( p < 0.001) and II ( p < 0.001) (Fig. b). Cortical thickness (Ct.Th) decreased from tip to corpus of the axis (zone I vs . II p = 0.03; zone I vs. III p < 0.001; zone II vs . III p < 0.001) (Fig. c). Similarly, Ct.Po decreased continuously from the tip to the corpus (zone I vs . II p < 0.001; zone I vs. III p < 0.001; zone II vs . III p = 0.032) (Fig. d). Analysis of the trabecular compartment (Fig. a) revealed lower TMD in zone III compared to the other two regions (zone I vs . II p = 0.410; zone I vs . III p < 0.001; zone II vs . III p < 0.001) (Fig. b). Moreover, a decrease in BV/TV was observed between zones I to III (zone I vs . II p < 0.001; zone I vs . III p < 0.001; zone II vs . III p < 0.001) (Fig. c). Analogously, Tb.Th decreased along zones I-III (zone I vs . II p < 0.001; zone I vs . III p < 0.001; zone II vs . III p < 0.001). While Tb.Sp was lower in zone I compared to both other zones (zone I vs . II p < 0.001; zone I vs. III p = 0.004; zone II vs . III p = 0.695), zone I showed the highest Tb.N compared to zone II and III (zone I vs. II p < 0.001; zone I vs . III p = 0.006; zone II vs . III p = 0.243). Conn.D in zone III was higher than in zone I and II (zone I vs . II p = 0.354; zone I vs . III p = 0.020; zone II vs . III p = 0.011). The mean SMI in zone I was negative and showed lower values than zone II and III (zone I vs. II p < 0.001; zone I vs . III p < 0.001; zone II vs . III p < 0.001), showing a plate-like, sclerotic configuration of trabeculae in the apex of the DAX, which changes to rod-like trabeculae caudally (Fig. d). DA did not vary between the three analyzed regions (zone I vs . II p = 0.169; zone I vs . III p = 0.334; zone II vs. III p = 0.734). 
Sex Differences Although sex differences in trabecular and cortical parameters were observed (Supplementary Fig. 1, 2), the effect of age and sex on region-specific trends was largely negligible (Supplementary Table 1, 2). More specifically, differences between zones remained unaffected; however, when females were evaluated separately, similar Conn.D in all zones was observed, unlike in males and the overall population. Similarly, Ct.Po only differed between zones in male specimens.
In the present study, the bone microarchitecture of the axis was analyzed using ex vivo HR-pQCT. The results were correlated with CT data from patients to derive implications for the occurrence and care of axis fractures. The major findings were: (1) CT-based apparent densities in zone I were higher than in zone II and III, mainly independent of fracture occurrence. (2) Cortical and trabecular microarchitecture parameters decreased from zone I in the tip of the DAX to zone III in the CAX, while trabecular separation was lowest and trabecular number highest at the apex accordingly. (3) The trabecular and cortical tissue mineral density was similar at the base and tip, while lowest at the CAX. (4) The SMI indicated a plate-like and dense, sclerotic trabecular microarchitecture in the tip of the DAX transforming into a highly cross-linked rod-like trabecular microarchitecture in the CAX. In clinical cases, we demonstrated lower apparent density in zone II and III than in zone I, irrespective of fracture status. In addition, the sagittal plane of zone II in the fracture group, but not the non-fracture group, showed a higher apparent density than the sagittal plane in zone III. This difference could be explained by the higher apparent density in the fracture area due to the occurrence of hematoma and the impaction of trabeculae. Although no differences in individual zones were detected between fractured and non-fractured patients, there was a pronounced regional heterogeneity in both the non-fracture and the fracture group with lower apparent density in zones II and III compared to zone I. This could be a correlate of reduced bone quality at the base of the DAX and in the CAX. As it is known that clinical CT scans provide only a rough estimate of BMD, since the apparent density in HU cannot be directly translated into BMD, the bone quality and microarchitecture of the axis was further analyzed experimentally using HR-pQCT on full bone specimens. To our best knowledge, this is the very first study to analyze the bone microarchitecture of the axis via HR-pQCT in a comparable sample size of n = 28, following a clinically relevant fracture classification and correlating the findings to clinical groups of patients with and without fractures of the axis. We suggest that the decreasing cortical and trabecular microarchitecture from the tip of the DAX to the CAX is a major factor influencing fracture susceptibility. Notably, similar analyses have been previously performed by our group in other skeletal regions such as the distal fibula, inferring fracture mechanisms based on regional heterogeneity in local bone microarchitecture . In 1994, Amling et al . analyzed the microarchitecture of the axis histologically in n = 22 autopsy specimens with a mean age of 50 years . For analysis, they also chose a division of the axis into three zones following the classification of Anderson and D'Alonzo. They found a BV/TV of 20% in the CAX, 10% in the base of the DAX, and 26% in the odontoid process. The BV/TV was significantly lower in the base of DAX compared to the other zones. Moreover, the trabecular pattern factor, a parameter that indirectly accounts for inter-trabecular connections by determining the relation of convex and concave surface patterns , was determined in these regions, with the worst trabecular connection detected at the base of the DAX. The group concluded the presence of a region of least resistance in the base of the DAX due to lower bone mass and weaker bone microarchitecture . 
In a subsequent study, Amling et al. provided additional data from n = 11 autopsy specimens with known osteoporosis and deduced an increased risk for fractures and subsequent non-unions of the dens in osteoporotic bone due to impaired bone quality in the base of the DAX. Although providing important insights, these previous analyses were performed only two-dimensionally in the sagittal plane . Our data offer three-dimensional insights into the bone quality and microarchitecture of the corresponding regions of the axis. From the current data, it can be concluded that in zone II, which corresponds to the base of the DAX, and in zone III in the CAX, which may include a residual subdental synchondrosis, the trabecular bone is weaker than in zone I at the apex of the DAX due to lower bone mass with fewer and thinner trabeculae. Further, the differences in BMD indicated higher mineralization within the two cranial zones, supporting this assumption. Regarding the microarchitecture of the trabecular bone, the SMI and the Conn.D indicated a dense, sclerotic, plate-like structure at the tip of the dens, while the CAX presented with high interconnectivity. Previous studies have shown that local bone mass and microarchitecture have a major influence on the occurrence of fractures [ – ]. By analyzing bone specimens from lumbar vertebrae via image-guided failure assessment under microtomographic imaging, it was shown that a decrease in BV/TV is accompanied by an increased fracture probability and that especially the combination of low BV/TV and low Conn.D indicates the weakest bone region. The authors concluded that the weakest region within a bone structure may be crucial for fracturing the whole complex and that the above-mentioned structural parameters are crucial for identifying these weak regions . Furthermore, an association of low Tb.N and high Tb.Sp with the occurrence of vertebral fractures has been shown in the past . Although no biomechanical studies with combined consideration of the high-resolution microarchitecture of the axis have been performed to date, previous observations suggest that microarchitecture plays a major role in the occurrence of fractures. However, the occurrence of fractures of the axis must be considered in a more differentiated manner and cannot be attributed to a single aspect such as bone quality. Both anatomical factors, i.e., bone structure, bone composition, joints, and ligaments, and the trauma mechanism are decisive. A recent biomechanical study investigated the influence of bone density on the occurrence of axis fractures, indicating an increased fracture risk with low BMD, while the direction of the applied loading showed little influence . Nevertheless, the articular connection between the anterior arch of the atlas and the DAX in particular seems likely to be relevant for the occurrence of DFTII. While fractures of the axis in young patients often arise in the context of high-energy trauma, a typical trauma mechanism in geriatric patients is a frontal head impact causing reclination of the upper cervical spine. It is likely that this anatomical feature and the resulting high mechanical moments, accompanied by low BMD in the base of the DAX, determine the high fracture susceptibility for DFTII. Our data show that in the extension of the DAX caudally into the CAX, Ct.Th, BV/TV, and trabecular as well as cortical TMD continued to decrease, whereas Amling et al. postulated an increase in BV/TV .
These differences might be due to methodological discrepancies (i.e., 2D vs. 3D analysis) and age differences between the groups investigated. Due to the obvious difficulty in obtaining samples, no studies to date have investigated the trabecular bone of the axis using HR-pQCT in a similarly large collective. Recently, Wang et al. analyzed n = 5 dry bone samples with a mean age of 52 years . Four volumes of interest (VOI) were analyzed within the trabecular bone of the DAX: VOI I was defined in the tip of the DAX, VOI II in the neck of the DAX, VOI III in the body of the DAX, and VOI IV in the base of the DAX. In that study, higher bone mass and thicker and more numerous trabeculae were determined in VOI I compared to VOI IV, whereas trabecular separation was lower in VOI I than in VOI IV . While these findings, obtained in a small collective of n = 5 specimens, are generally consistent with our data, an analysis of the CAX was not performed in the study by Wang et al. In our view, an additional analysis of the CAX seems essential. The classification most commonly used in clinical practice for fractures of the axis, despite the gaps that certainly exist, is the classification according to Anderson and D'Alonzo . DFTII followed by DFTIII occur most frequently, whereas DFTI are very rare . While DFTII occur at the base of the DAX, DFTIII occur within the cylinder that results from an extension of the DAX caudally into the CAX. Furthermore, in this cylinder lies the screw trajectory for anterior screw fixation according to Böhler et al., frequently performed for the surgical treatment of DFTII. For these reasons, we considered an analysis of the bone microarchitecture in zones I–III based on the classification of Anderson and D'Alonzo to be suitable for deriving clinical implications. The following clinical implications emerge from our and previous results: The base of the DAX is prone to fracture based on the combination of low bone mass, low mineralization, and inferior trabecular microarchitecture [ – , , ]. In addition, the subdental trabecular bone represents a biomechanical weak point within the axis. On the one hand, this may account for the frequent occurrence of DFTIII in addition to DFTII. On the other hand, our data provide a possible explanation as to why the typical cut-out of screws after anterior screw fixation often occurs in zones II and III, which is also supported by the CT-based apparent densities described in the clinical part of the study. Therefore, we consider anterior screw fixation to bear a high risk of failure, especially in geriatric patients [ , , ]. However, given the important advantages of the technique compared to alternative or non-surgical procedures, such as its low invasiveness and the preserved rotational ability between atlas and axis, we consider the establishment of alternative osteosynthesis procedures that place lag screws in the DAX with separate anchorage in the CAX, such as osteosynthesis plates, essential to increase the quality of care and the safety of elderly patients with fractures of the axis. These aspects should be further investigated in additional studies. Limitations of our study include that only non-fractured axis specimens could be analyzed by HR-pQCT. Thus, no direct comparison of microarchitecture between non-fracture and fracture groups could be made, which should be performed in the future.
Although full autopsy allowed us to exclude conditions that locally affect skeletal microarchitecture (e.g., tumors), the presence of osteoporosis could not be determined by established methods. Nevertheless, the autopsy specimens offered us a unique opportunity to perform the high-resolution HR-pQCT examination, which is usually limited to distal bones (radius, tibia) and cannot be performed on the cervical spine in the clinical setting. Another limitation is that HR-pQCT measurements of the specimens were performed with a resolution of nearly 15–30% (depending on the region) of the trabecular thickness. Hence, the partial volume effect might have influenced the mineralization results of this study. However, an additional calculation showed similar correlation coefficients between TMD and thickness in the thinnest and thickest structures, indicating a negligible influence (p = 0.76). Another limitation of our study is that the analysis was limited to a geriatric patient population. The resulting narrow age range prevented a meaningful analysis of age-related bone loss, and thus of the presumed influence of age on fracture risk or screw loosening. In conclusion, the axis is characterized by decreasing cortical and trabecular microarchitecture from the tip of the DAX to the CAX, indicating inferior bone quality in this region. These findings may partly explain the clinical observation that zones II and III, i.e., the base of the DAX and the CAX, represent sites with a higher fracture susceptibility than zone I, the tip of the DAX. While non-bony anatomical structures and the trauma mechanism certainly also play a critical role in fracture occurrence, the reduced bone quality in zones II and III could be a risk factor for implant loosening after anterior screw fixation. In order to ensure safe osteosynthesis of axis fractures, improved, additional anchorage of the implants in zones II or III, e.g., by osteosynthesis plates, might be indicated, especially in aged patients with limited bone status.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 1770 kb)
|
Effects of phosphogypsum on enzyme activity and microbial community in acid soil
|
66ded2ad-61ef-4377-9b41-069ae0a99cd5
|
10106453
|
Microbiology[mh]
|
Phosphogypsum (PG) is a solid waste produced when phosphate rock is decomposed with sulfuric acid to produce phosphoric acid. Its main component is calcium sulfate, but it also contains harmful impurities such as soluble phosphorus, fluorine and heavy metals, so PG carries a high environmental risk. At present, about 300 million tons of PG are produced worldwide every year, including about 70 million tons in China , , but the recycling rate of PG is only about 30%, and large amounts of PG are still mainly stockpiled or landfilled . Research on PG utilization has therefore long been an active topic . Use as a soil conditioner is one route for PG resource utilization . PG can be used to improve not only saline-alkali land but also acid soil . Soil acidity affects agricultural development in extensive areas around the world, and low fertility, Al 3+ toxicity and Ca 2+ deficiency are considered the key constraints in acid soils. PG can supplement calcium, sulfur and phosphorus in acid soil. In addition, exchangeable Al 3+ can be replaced by the cations from PG; the released Al 3+ then combines with the anions from PG to form AlSO 4 + , AlF 2 + and AlF 3 , thus decreasing Al 3+ toxicity – . However, some elements from PG (phosphorus, fluorine and heavy metals) may pose environmental risks through accumulation in soil and crops if they exceed permissible environmental limits. Therefore, attention should be paid to the impact on soil ecology when applying PG in agriculture. Many studies have examined PG as a soil conditioner, mainly focusing on soil physical and chemical properties and plant growth, but research on the impact of PG on soil microbial ecology remains limited. Soil microorganisms are an important part of the soil ecosystem: they directly participate in the decomposition of soil organic matter, humus synthesis and nutrient transformation, and they promote soil formation and development. Soil enzymes are important participants in the metabolic processes of the soil ecosystem, and soil biochemical reactions are closely linked to enzyme catalysis. In addition, soil microbial communities are extremely sensitive to changes in the soil environment, which can dramatically affect ecosystem functions. Microorganisms also play vital roles in soil formation and quality, in promoting plant growth, and as plant pathogens, and microbial communities are actively involved in the transformation and degradation of various pollutants, playing a crucial role in remediation processes. Guizhou in China is located in a subtropical monsoon humid climate zone, with an annual precipitation of 1060–1200 mm and widespread soil acidification. In addition, the roughly 11 million tons of PG discharged in Guizhou every year put great pressure on the healthy development of the local phosphorus chemical industry and the surrounding ecological environment. Therefore, in order to address both PG stockpiling and soil acidification, and building on the extensive previous work on PG as an acid soil conditioner, we attempted to use PG to improve acid soil. However, in PG-amended soils it is still insufficiently understood how microbial communities respond to the resulting environmental changes and how dominant soil microbes affect ecosystem function. Therefore, it is necessary to further evaluate the impact of PG on soil microbial ecology.
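A minimal schematic of the amelioration chemistry summarized above may be helpful; the stoichiometry shown is a simplification and is not given in the cited studies, and only the species named in the text are used:

$$
\begin{aligned}
2\,\text{soil-Al} + 3\,\mathrm{Ca^{2+}} &\rightarrow 3\,\text{soil-Ca} + 2\,\mathrm{Al^{3+}_{(aq)}} \\
\mathrm{Al^{3+} + SO_4^{2-}} &\rightarrow \mathrm{AlSO_4^{+}} \\
\mathrm{Al^{3+} + 2\,F^{-}} &\rightarrow \mathrm{AlF_2^{+}} \\
\mathrm{Al^{3+} + 3\,F^{-}} &\rightarrow \mathrm{AlF_3}
\end{aligned}
$$

In words, Ca 2+ from PG displaces exchangeable Al 3+ into solution, where it is complexed by the sulfate and fluoride anions supplied by PG, lowering the activity of free Al 3+ .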
The aims of this study were to: (1) evaluate the effect of PG on soil respiration and enzyme activity; (2) describe the structure and diversity of soil bacterial communities under different PG doses; and (3) explore the environmental factors that shape soil bacterial communities. The results not only provide guidance for the management of PG in this area but also serve as a reference for using PG to improve acid soil.
Collection of PG and soil samples
The gray PG powder was obtained from a factory in Guizhou, China. Its main component was dihydrate gypsum (CaSO 4 ·2H 2 O). It was acidic (pH 3.20), as it contained small amounts of free phosphoric acid, soluble fluoride and sulfate (Table ). The PG also contained small amounts of organic matter and heavy metals (Tables , ). The raw PG was dried in an oven at 40 °C for 8 h to remove free water, then ground, sieved through a 2 mm mesh and sealed for storage. The soil was collected from acid soil in Guiyang, Guizhou, China. Following the five-point method, the soil was sampled from the top 0–20 cm of the tillage layer. The acid soil sample was air-dried, cleared of weeds, ground through a 2 mm sieve, thoroughly blended and sealed for storage.
Experimental design
Previous studies showed that PG applications in the range of 0–10% are beneficial for improving the physical and chemical properties of acid soil – . Hence, five treatments were designed: Control (no PG), P1 (0.01% PG), P2 (0.1% PG), P3 (1% PG) and P4 (10% PG). For each treatment, 200 g (dry weight) of the soil and PG mixture was prepared in a 500 mL box, adjusted with distilled water to 40% moisture content and covered with a lid. The lid was pierced with 4 small holes to ensure air circulation in the box. Each treatment had 3 replicates. All sample boxes were placed in an artificial climate incubator maintained at 25 °C and 80% humidity. During the incubation, water was added to the boxes every 10 days to keep the soil moisture content constant. After 90 days of incubation, the soil was removed from the incubator and divided into three parts: the first part was air-dried naturally to determine soil physical and chemical properties; the second part was stored in a refrigerator at 4 °C for the determination of soil enzyme activity; and the third part was stored at − 30 °C for high-throughput sequencing analysis of soil bacteria and fungi.
Determination of soil physico-chemical properties, soil respiration and soil enzyme activity
Determination of soil physico-chemical properties: The soil active acidity (pH (H 2 O)) was measured with a pH meter (soil:water = 1:2.5); the soil was extracted with 1 mol/L KCl (soil:solution = 1:2.5), and the soil potential acidity (pH (KCl)) was then measured with a pH meter. Soil electrical conductivity (EC) was measured with a conductivity meter (soil:water = 1:5). Soil organic matter (SOM) was determined by potassium dichromate titration (NY/T 1121.6–2006). After the soil samples were digested with perchloric acid and sulfuric acid, total phosphorus (TP) was determined by molybdenum antimony anti-spectrophotometry. Available phosphorus (AP) was determined by hydrochloric acid ammonium fluoride extraction with molybdenum antimony anti-spectrophotometry (NY/T 1121.7-2014). Ammonium nitrogen (NH 4 + -N) was determined by KCl extraction followed by indophenol blue colorimetry. Nitrate nitrogen (NO 3 − -N) was determined by dual-wavelength UV colorimetry. Ca 2+ was determined by atomic absorption spectrometry. Sulfate (SO 4 2− ) was determined by barium sulfate turbidimetry. Water-soluble fluoride (F − ) was determined by the ion-selective electrode method. Each sample was measured in 3 replicates.
Determination of soil respiration: After the soil had been treated with PG for 90 days, 50.00 g of fresh soil was weighed into a 500 mL jar and spread evenly over the bottom. A 25 mL beaker containing 10 mL of 1 mol/L NaOH solution was placed in the jar to trap the carbon dioxide (CO 2 ) generated by soil respiration, and the jar was sealed and incubated at 28 °C for 24 h. Then 10 mL of 1.0 mol/L BaCl 2 solution and 2 drops of phenolphthalein indicator were added to the beaker containing the NaOH solution, and the residual NaOH was titrated with 0.500 mol/L HCl standard solution to the endpoint (pink turning colorless). The carbon dioxide content was calculated from the amount of HCl consumed. Each sample was measured in 3 replicates, and a blank control was run at the same time.
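The conversion from HCl consumption to CO 2 is not written out above. A minimal sketch of the usual blank-corrected calculation is given below; it assumes the trapping reaction CO 2 + 2 NaOH → Na 2 CO 3 + H 2 O (so half of the difference in HCl consumption between blank and sample corresponds to trapped CO 2 ), and the titration volumes in the example are hypothetical, not measured values from this study.

```python
# Sketch of the blank-corrected CO2 calculation for the NaOH-trap method above.
# Assumption (not stated explicitly in the text): CO2 + 2 NaOH -> Na2CO3 + H2O,
# so n(CO2) = (V_blank - V_sample) * c(HCl) / 2.

def co2_carbon(v_blank_ml, v_sample_ml, c_hcl_mol_l=0.5, soil_g=50.0, hours=24.0):
    """CO2-C evolved, in mg C per g soil per day."""
    n_co2 = (v_blank_ml - v_sample_ml) / 1000.0 * c_hcl_mol_l / 2.0  # mol CO2 trapped
    mg_c = n_co2 * 12.0 * 1000.0                                     # mg carbon
    return mg_c / soil_g * (24.0 / hours)

# Hypothetical titration volumes: 18.6 mL HCl for the blank, 15.2 mL for a sample.
print(round(co2_carbon(18.6, 15.2), 3), "mg CO2-C per g soil per day")
```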
Determination of soil enzyme activity: Soil urease activity was measured by sodium phenolate colorimetry and expressed as mg of NH 3 -N released per g of soil in 24 h . Invertase activity was measured by 3,5-dinitrosalicylic acid colorimetry and expressed as mg of glucose produced per g of dry soil in 24 h . Catalase activity was expressed as the volume (mL) of 0.1 mol/L KMnO 4 consumed by 1 g of dry soil in 1 h . Phosphatase activity was determined by disodium phenyl phosphate colorimetry as the mass (μg) of phenol released from 1 g of soil in 24 h .
Extraction and sequencing of total DNA from soil and analysis of microbial community
Microbial DNA was extracted from the soil samples using the FastDNA Spin Kit for Soil (MP Biomedicals, Santa Ana, CA). The quantity and quality of the extracted DNA were measured with an ultramicro spectrophotometer (NanoDrop 2000, Thermo Scientific, USA) and a gel electrophoresis instrument (DYY-6C, Beijing Liuyi, China). PCR amplification was performed on the V3–V4 region of the bacterial 16S rRNA gene with the primer pair 338F (5'-ACTCCTACGGGAGGCAGCA-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'). The fungal ITS rRNA gene was amplified with the primer pair ITS5F (5'-GGAAGTAAAAGTCGTAACAAGG-3') and ITS1R (5'-GCTGCGTTCTTCATCGATGC-3'). The 25 μL reaction system contained 5 μL of 5 × reaction buffer, 5 μL of 5 × GC buffer, 2 μL (2.5 mM) dNTPs, 1 μL (10 μM) of forward primer, 1 μL (10 μM) of reverse primer, 2 μL of DNA template, 8.75 μL of distilled water, and 0.25 μL of Q5 DNA polymerase. The PCR program consisted of initial denaturation at 98 °C for 2 min, followed by 25–30 cycles of denaturation at 98 °C for 15 s, annealing at 55 °C for 30 s and extension at 72 °C for 30 s, with a final extension at 72 °C for 5 min. The resulting PCR products were stored at 10 °C. The PCR amplicons were purified with Agencourt AMPure Beads (Beckman Coulter, Indianapolis, IN, USA), and their quantity was determined with the PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). Equal amounts of the amplicons were pooled and sequenced by Shanghai Personal Biotechnology Co., Ltd. Paired-end sequencing (2 × 300) was performed on the Illumina Novaseq-PE250 platform using the MiSeq Reagent Kit v3.
Data analysis
Microsoft Excel 2010 was used for statistics and calculations, SPSS version 26.0 for significance and correlation analyses, and Origin 2021b for graphing. One-way ANOVA followed by Duncan's multiple comparisons was used to test the significance of differences between treatment groups (P < 0.05). Pearson correlation coefficients were used for correlation analysis.
The free online platform GenesCloud ( https://www.genescloud.cn ) was used to analyse the high-throughput sequencing data and to visualise the OTUs of each sample.
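A minimal sketch of this statistical workflow in Python (SciPy) is shown below; the replicate values are placeholders rather than the study's data, and Duncan's multiple range test is not available in SciPy, so the sketch stops at the omnibus ANOVA and a Pearson correlation.

```python
import numpy as np
from scipy import stats

# Placeholder replicate measurements (e.g., an enzyme activity) per treatment.
groups = {
    "Control": [1.02, 0.98, 1.05],
    "P1":      [1.21, 1.18, 1.25],
    "P2":      [1.30, 1.27, 1.33],
    "P3":      [0.85, 0.88, 0.82],
    "P4":      [0.60, 0.63, 0.58],
}

# One-way ANOVA across the five PG doses (significance threshold P < 0.05).
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4f}")

# Pearson correlation between the enzyme activity and a soil property
# (both vectors are placeholders, ordered Control..P4 by replicate).
enzyme = np.concatenate(list(groups.values()))
soil_ec = np.array([0.11, 0.12, 0.11, 0.15, 0.16, 0.15,
                    0.33, 0.35, 0.34, 1.10, 1.05, 1.12, 3.9, 4.1, 4.0])
r, p = stats.pearsonr(enzyme, soil_ec)
print(f"Pearson r = {r:.2f}, P = {p:.4f}")
```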
Effect of PG on soil physical and chemical properties
The application of PG significantly changed the physical and chemical properties of the acid soil (Table ). Almost all measured soil properties changed significantly at the higher PG applications. For example, in P2 (0.1% PG), the contents of AP, TP, NH 4 + -N, NO 3 − -N, Ca 2+ and SO 4 2− and the EC increased significantly. In P3 (1% PG) and P4 (10% PG), in addition to the above soil components and EC, pH (KCl) and water-soluble F − also increased significantly, while pH (H 2 O) and soil organic matter (SOM) content decreased significantly.
Effect of PG application on soil respiration
The PG dose affected the soil respiration rate (Fig. ). After PG was applied, there was no significant difference in soil respiration rate between the control, P1 and P2, but the soil respiration rates of P3 and P4 decreased by 6.07% and 11.65%, respectively, compared to the control.
Effect of PG application on soil enzyme activity
The PG dose affected soil enzyme activity (Fig. ). Catalase activity decreased significantly with increasing PG application (except P1) (Fig. A). The application of 0.01% or 0.1% PG significantly increased urease activity (P1 and P2), whereas 1% or 10% PG significantly inhibited urease activity (P3 and P4) (Fig. D). Compared with the control, there was no significant difference in soil phosphatase or invertase activity when 0.01–1% PG was applied (P1, P2 and P3). When 10% PG was applied, soil phosphatase activity decreased markedly, by 21.75%, and soil invertase activity increased by 17.26% (Fig. B,C). The correlation analysis between soil enzyme activities and soil properties is shown in Fig. . Catalase was positively correlated with SOM (P < 0.01), negatively correlated with PG, F − , SO 4 2− , Ca 2+ , NO 3 − -N, NH 4 + -N, TP, AP, EC and pH (KCl) (P < 0.01), and not significantly correlated with pH (H 2 O) (P > 0.05). Invertase was positively correlated with F − , NH 4 + -N and AP (P < 0.05) and with PG, SO 4 2− , Ca 2+ , TP, EC and pH (KCl) (P < 0.01), negatively correlated with SOM (P < 0.01), and not significantly correlated with NO 3 − -N or pH (H 2 O) (P > 0.05). Phosphatase was only negatively correlated with pH (H 2 O) (P < 0.05). Urease was positively correlated with SOM (P < 0.01), negatively correlated with PG, TP, AP and EC (P < 0.05) and with F − , SO 4 2− , Ca 2+ , NH 4 + -N and pH (KCl) (P < 0.01), and not significantly correlated with NO 3 − -N or pH (H 2 O) (P > 0.05).
Effects of PG on soil bacterial and fungal communities
Community composition of soil bacteria and fungi
With increasing PG content, the compositions of the dominant bacterial taxa at the phylum level were similar, but the relative abundances of some phyla differed (Fig. A and Table ). The dominant bacterial phyla (> 5% relative abundance) in the control and PG-treated soils were Actinobacteria , Proteobacteria , Chloroflexi and Acidobacteria , with relative abundances of 25.6–34.6%, 22.2–42.9%, 7.1–16.8% and 11.1–14.5%, respectively (Fig. A and Table ). Among the four dominant bacterial phyla, P4 (10% PG) reduced the relative abundance of Actinobacteria , Acidobacteria and Chloroflexi by up to 26.01%, 57.74% and 20.72%, respectively, and increased the relative abundance of Proteobacteria by 93.24% compared to the control group.
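The percentage figures quoted above can be reproduced from a phylum-level count table as in the short sketch below (pandas assumed; the counts are toy values, not the study's data): relative abundance is computed within each treatment and then expressed as a percent change versus the control.

```python
import pandas as pd

# Toy phylum-by-treatment count table (placeholder values, not the study's data).
counts = pd.DataFrame(
    {"Control": [320, 250, 150, 130], "P4": [230, 470, 60, 100]},
    index=["Actinobacteria", "Proteobacteria", "Acidobacteria", "Chloroflexi"],
)

# Relative abundance (%) of each phylum within each treatment.
rel = counts / counts.sum(axis=0) * 100

# Percent change of each phylum's relative abundance in P4 versus the control,
# which is how statements such as "reduced by 57.74%" can be derived.
change = (rel["P4"] - rel["Control"]) / rel["Control"] * 100
print(rel.round(1))
print(change.round(1))
```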
With increasing PG content, the compositions of the dominant fungal phyla were similar, but the relative abundances of some phyla were altered, with the P4 treatment having the greatest impact (Fig. B and Table ). The dominant fungal phyla (> 5% relative abundance) in PG-treated soil were Ascomycota (30.54–41.53%), Mortierellomycota (25.05–48.36%) and Basidiomycota (17.32–27.08%), together accounting for 89.07–96.63% of total relative abundance (Fig. B and Table ). Compared with the control, the 10% PG treatment significantly reduced the relative abundance of Ascomycota and Basidiomycota but significantly increased the relative abundance of Mortierellomycota .
Soil microbial alpha diversity analysis
Microbial alpha diversity refers to the richness, diversity and evenness of microbial species in a specific area, and different alpha diversity indices reflect different aspects of the community. In this experiment, the Chao1, Shannon and Pielou indices were selected to reflect the richness, diversity and evenness of the microbial community. The PG dose affected the richness, diversity and evenness of the soil bacterial and fungal communities (Table ). When 1% PG was applied (P3), soil bacterial richness, diversity and evenness decreased significantly by 11.57%, 3.25% and 2.33%, and soil fungal diversity decreased significantly by 9.97%. When 10% PG was applied (P4), soil bacterial richness, diversity and evenness decreased significantly by 30.07%, 9.16% and 4.65%, and soil fungal richness, diversity and evenness decreased significantly by 29.71%, 19.77% and 14.93%, respectively.
Beta diversity analysis of soil microorganisms
Principal coordinate analysis (PCoA) based on Bray-Curtis distance showed significant differences between treatment groups for both the soil bacterial (Fig. A) and fungal (Fig. B) communities. In Fig. A, ANOSIM (r = 0.881; P = 0.001) and Adonis (R 2 = 0.666; P = 0.001) confirmed that the differences in bacterial communities between groups were larger than those within groups. The distances between sample points showed that the bacterial communities of the control, P1 and P2 clustered together, while those of P3 and P4 each clustered separately, suggesting that 1–10% PG affected the bacterial community structure more strongly than the lower PG doses (0.01–0.1%). To identify specific bacterial taxa enriched in the different treatments, LEfSe was performed from the phylum to the genus level (Fig. ). For example, the phyla Actinobacteria and Chloroflexi were significantly enriched in the Control; the phyla Elusimicrobia and Acidobacteria in the P1 treatment; the phylum Patescibacteria in the P3 treatment; and the phyla Bacteroidetes , Firmicutes , Proteobacteria and Verrucomicrobia in the P4 treatment. This summary covers only the phylum level; Fig. shows the bacterial taxa significantly enriched at the different taxonomic levels for each treatment. In Fig. B, ANOSIM (r = 0.388; P = 0.006) and Adonis (R 2 = 0.397; P = 0.003) indicated that the differences in fungal communities were also larger between groups than within groups. The fungal communities of the control, P1, P2 and P3 treatments clustered together, while the P4 cluster was isolated, showing that 10% PG led to a pronounced change in fungal community structure. According to LEfSe analysis (Fig.
), the family Geminibasidiaceae was enriched in the Control; the family Agaricaceae in the P1 treatment; the families Diaporthaceae and Atheliaceae in the P2 treatment; and the family Sclerodermataceae in the P3 treatment. No fungal taxa were significantly enriched at the family level in the P4 treatment. The fungal taxa enriched in each treatment at all taxonomic levels are shown in Fig. . Overall, the PG dose significantly changed the community composition of bacteria and fungi in the acid soil, and the differences were most pronounced at the highest PG dose. In addition, the bacterial community was more sensitive to the application of PG than the fungal community.
Relationship between soil microbial community structure and environmental factors
The RDA results (Fig. A and Table ) showed that the first and second ordination axes explained 89.89% and 2.26% of the variation in bacterial composition, respectively. The soil bacterial composition was jointly influenced by soil EC, Ca 2+ , pH (KCl), F − and NH 4 + -N, among which EC and Ca 2+ were the main driving factors. Figure B and Table show that the first and second ordination axes explained 28.62% and 14.6% of the variation in fungal composition. For the soil fungal composition, the driving factors were F − , NH 4 + -N, pH (KCl), Ca 2+ and EC, of which F − and NH 4 + -N were dominant.
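For readers who wish to reproduce this type of analysis on their own OTU tables, the sketch below re-implements the alpha-diversity indices and the Bray-Curtis/PCoA ordination used above with NumPy/SciPy only. The GenesCloud platform used in the study is a web service, so this is an independent approximation; RDA would additionally require regressing the community matrix on the environmental variables before the ordination step. The OTU counts are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def shannon(counts):
    """Shannon diversity H' (natural log) from an OTU count vector."""
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def pielou(counts):
    """Pielou's evenness J' = H' / ln(S), with S the number of observed OTUs."""
    s = (counts > 0).sum()
    return shannon(counts) / np.log(s) if s > 1 else 0.0

def chao1(counts):
    """Bias-corrected Chao1 richness estimate."""
    s_obs = (counts > 0).sum()
    f1, f2 = (counts == 1).sum(), (counts == 2).sum()
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))

def pcoa(otu_table):
    """Classical PCoA on Bray-Curtis distances; returns sample coordinates
    and the proportion of variance explained by each positive axis."""
    d = squareform(pdist(otu_table, metric="braycurtis"))
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                   # Gower double-centering
    eigval, eigvec = np.linalg.eigh(b)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    pos = eigval > 1e-10
    coords = eigvec[:, pos] * np.sqrt(eigval[pos])
    return coords, eigval[pos] / eigval[pos].sum()

# Toy OTU table: rows = samples (5 treatments x 3 replicates), columns = OTUs.
rng = np.random.default_rng(0)
otu = rng.poisson(lam=5, size=(15, 200))
print("Shannon of sample 0:", round(shannon(otu[0]), 3))
print("Pielou of sample 0:", round(pielou(otu[0]), 3))
print("Chao1 of sample 0:", round(chao1(otu[0]), 1))
coords, explained = pcoa(otu)
print("PCo1/PCo2 explain:", np.round(explained[:2] * 100, 1), "%")
```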
Effect of PG application on soil physical and chemical properties
PG carries a high environmental risk because it contains harmful impurities such as soluble phosphorus, fluorine and heavy metals. Table shows that the contents of cadmium, arsenic, lead and chromium in PG were lower than those in the Control soil, while the content of mercury was higher than in the Control but below China's risk screening value (1.3 mg/kg Hg for pH ≤ 5.5). Therefore, the heavy metals in PG will not pose a threat to the soil environment. These findings agree with Pérez-López , who reported that PG did not contain large amounts of heavy metals and that addition of PG did not lead to soil contamination. Kassir et al. monitored the effect of PG application on heavy metals in Mediterranean red soil and showed that the exchangeable and acid-soluble contents of heavy metals in PG-applied soil were higher than those in PG-untreated soil. Our study found that the application of 0.1% PG reduced the content of water-soluble fluorine, whereas the application of 1% or 10% PG significantly increased it, indicating that an appropriate application rate of PG will not lead to soil fluorine pollution. Cui et al. also reported that the F − concentration in the leachate of PG-treated soils was lower than that in the blank treatment. A likely explanation for the increase in water-soluble fluorine at the higher doses is the F − carried into the soil by the PG itself (Table ). Therefore, PG should be applied to soil only after harmless pretreatment. This experiment found that the application of PG can increase the available nutrients of acid soil, such as AP, NH 4 + -N and NO 3 − -N. Crusciol et al. also found that PG can boost available NO 3 − -N in the 0–5 cm layer of tropical no-tillage soil . Kinjo & Pratt found that the content of NO 3 − in the soil solution increased linearly with increasing SO 4 2− because SO 4 2− and NO 3 − compete for adsorption sites in the soil . Such a competition effect can explain the positive effect of PG on NO 3 − . We found that PG did not ameliorate the soil active acidity (pH (H 2 O)) but did increase pH (KCl), thereby alleviating the potential acidity. Previous studies also reported that PG is not an effective material for ameliorating soil acidity as such, but that Al 3+ can react with SO 4 2− , F − and PO 4 3− in the soil solution, decreasing Al 3+ in solution and raising pH (KCl) .
Effect of PG on soil respiration
The experimental data showed that the application of PG can inhibit soil respiration, which is equivalent to inhibiting CO 2 emission. This result is consistent with Wu , who showed that PG treatment could inhibit greenhouse gas emission from wheat soil and reduce soil CO 2 emission by 2.5–6.6%. Research on using PG as a calcium source to store CO 2 showed that PG can increase the available calcium in the system , and this calcium can fix the CO 2 produced by organic matter decomposition and hence reduce CO 2 emission .
Effect of PG on soil enzyme activity
Soil enzyme activity is affected by soil physical and chemical properties, microorganisms, substrates and other factors . Although the effects of PG on the activity of some soil enzymes (invertase, amylase and cellulase) have been studied , its effects on catalase, invertase, phosphatase and urease are not clear.
This study found that low additions of PG (≤ 0.1%) improved soil urease activity, whereas excess PG inhibited the activity of soil catalase, phosphatase and urease (but not invertase). Sengupta and Dhal found that a mixture of 150 mg acid soil and 5.25 g PG reduced the soil microbial catabolic activity, but that this activity gradually recovered over time . Future studies should therefore examine the dynamic changes of soil enzyme activity. Soil urease activity first increased and then decreased, consistent with the changing trend of bacterial richness and diversity (Table ), indicating that the soil bacterial community may affect soil urease. The correlation analysis between soil enzyme activities and soil properties showed that phosphatase was significantly correlated with pH (H 2 O), indicating that soil pH was the main factor affecting phosphatase activity. Unexpectedly, acid phosphatase activity was not significantly correlated with TP or AP, which might mainly be because TP and AP increased with the phosphate brought in by the PG. Catalase, invertase and urease were significantly correlated with pH (KCl), SOM, Ca 2+ , SO 4 2− and F − . Previous studies have shown that the interaction between SO 4 2− /F − and Al 3+ in soil affects soil acidity (pH (H 2 O) and pH (KCl)) , and that Ca 2+ has a strong impact on the storage and stability of soil organic matter (SOM) . Therefore, it is speculated that Ca 2+ , SO 4 2− and F − ions may be the main factors influencing soil enzyme activity. The decrease in soil enzyme activity may occur because Ca 2+ , SO 4 2− and F − ions restrain the growth of microorganisms and hence their secretion of enzymes. The concurrent increase in invertase activity while microorganisms were inhibited may be due to the proliferation of specific sucrose-decomposing microorganisms.
Effect of PG on soil microbial community
The dominant bacterial phyla in PG-treated soil were Actinobacteria , Proteobacteria , Chloroflexi and Acidobacteria . These results agree with Guo , who reported that the dominant bacteria were Proteobacteria , Chloroflexi and Actinobacteria after application of CaCO 3 (0, 2.25, 4.5, 7.5 t/hm 2 ) to acid soils in southern China. It has been reported that Proteobacteria , Actinobacteria , Firmicutes and Bacteroidetes are the dominant bacteria in PG itself, and these taxa generally adapt well to extreme environments , . In the present study, the changes in bacterial community composition can be considered adaptations to the altered soil environment, as the application of PG changed the soil physicochemical properties (Table ). In addition, the relative abundances of the phyla Bacteroidetes , Firmicutes , Proteobacteria and Verrucomicrobia increased under the 10% PG treatment (Fig. A) owing to the better salt tolerance of these taxa , which may help to improve the salt tolerance of plants . The dominant fungal phyla in PG-treated soil were Ascomycota , Mortierellomycota and Basidiomycota . The bacterial community exhibited more pronounced changes than the fungal community (Figs. , ), because bacterial communities are more responsive than fungal communities . The PG dose significantly affected the soil microbial community structure in the acid soil: the richness, diversity and evenness of soil bacteria and fungi decreased significantly with increasing PG (Table ).
Redundancy analysis (RDA) was used to explore the relationships between the bacterial and fungal communities and the environment. Soil EC and Ca 2+ were the main driving factors for the shifts in bacterial communities, and F − and NH 4 + -N were the primary driving factors for the fungal communities. Therefore, with increasing PG, microbial alpha diversity decreased as soil EC, Ca 2+ and F − increased, subjecting the soil microbial community to salt stress, calcium stress and fluoride toxicity. Corwin and Yemoto reported a significant correlation between soil EC and salinity: the higher the EC, the higher the salinity . Hence, the change in the microbial communities after the PG treatments may result from salt stress (EC). In addition, studies have shown that a large influx of calcium into cells can cause apoptosis, necrosis or autophagy, leading to severe cell damage or even death . Calcium is an important second messenger, and calcium overload can over-activate a variety of enzyme systems, destroy cell membranes, produce large numbers of free radicals such as reactive oxygen and nitrogen species, attack the integrity of mitochondria and the genome, cause DNA damage and induce apoptosis . Cristina et al. found that calcium ions inhibited microbial activity by disrupting intercellular communication. The excessive use of PG resulted in high concentrations of water-soluble calcium ions in the soil solution, which may have a cytotoxic effect and affect soil microbial diversity and abundance. Because the fluorine content of PG is higher than that of the soil, the F − content of the soil increased with increasing PG application. A fluorine-rich environment is not conducive to the growth and reproduction of microorganisms: a low concentration of fluorine can inhibit microbial metabolism and growth, and a high concentration can kill microorganisms . Therefore, the diversity and abundance of bacteria and fungi in the soil decreased.
Effects of PG on soil organic matter
The composition of microbial communities plays a fundamental role in soil organic matter decomposition. Proteobacteria are considered crucial for the decomposition of lignocellulose and organic matter, and Firmicutes play important roles in the decomposition of cellulose, hemicelluloses and lignin . The decrease in soil organic matter content may be due to the increased relative abundance of Firmicutes and Proteobacteria promoting the decomposition of soil organic matter. At the same time, the increase in invertase activity may be due to greater invertase secretion by these dominant taxa, whose abundance changed most in the P4 treatment. Shifts in microbial communities and enzymatic activity may alter soil organic matter turnover and accumulation. In the future, it will be important to conduct a more comprehensive evaluation of how phosphogypsum affects the microorganisms and enzymes that participate in the carbon cycle, such as amylase, β-xylosidase, β-glucosidase, cellulase and laccase. In this experiment, soil organic matter decreased and invertase activity increased, indicating that the application of phosphogypsum may cause soil organic matter loss.
PG carries a high environmental risk because it contains harmful impurities such as soluble phosphorus, fluorine and heavy metals. Table shows that the contents of cadmium, arsenic, lead and chromium in PG are lower than those in the Control, while the content of mercury is higher than that in the Control but lower than the risk screening value of China (1.3 mg/kg Hg at pH ≤ 5.5). Therefore, the heavy metals in PG will not pose a threat to the soil environment. These findings agree with Pérez-López, who reported that PG did not contain large amounts of heavy metals and that addition of PG did not lead to soil contamination. Kassir et al. monitored the effect of PG application on heavy metals in Mediterranean red soil and showed that the exchangeable and acid-soluble contents of heavy metals in PG-treated soil were higher than those in untreated soil. Our study found that the application of 0.1% PG reduced the content of water-soluble fluorine, whereas the application of 1% or 10% PG significantly increased it, indicating that an appropriate application rate of PG will not lead to soil fluorine pollution. Cui et al. also reported that the F− concentration in the leachate of PG-treated soils was lower than that in the blank treatment. One possible explanation for the increase of water-soluble fluorine at higher doses is that PG itself carried F− into the soil (Table ). Therefore, PG should be applied to the soil only after harmless treatment. This experiment found that the application of PG can improve the available nutrients of acidic soil, such as AP, NH4+-N and NO3−-N. Crusciol et al. also found that PG can boost available NO3−-N in the 0–5 cm layer of tropical no-tillage soil. Kinjo & Pratt found that the content of NO3− in soil solution increased linearly with increasing SO42− because of competition between SO42− and NO3− for soil adsorption sites. Such a competition effect can explain the positive effect of PG on NO3−. It was found that PG could not improve the soil active acidity (pH (H2O)) but could improve the soil potential acidity (pH (KCl)). Previous studies also reported that PG was not an effective material for improving soil active acidity, but Al3+ can react with SO42−, F− and PO43− in the soil solution, decreasing Al3+ in the soil solution and improving the soil potential acidity (pH (KCl)).
The experimental data showed that the application of PG can inhibit soil respiration, which is equivalent to reducing CO2 emission. This result is consistent with that of Wu, who showed that PG treatment could inhibit greenhouse gas emission from wheat soil and reduce soil CO2 emission by 2.5%–6.6%. Research on using PG as a calcium source to store CO2 showed that PG could increase the available calcium in the system, and this calcium could fix the CO2 produced by organic matter decomposition and hence reduce CO2 emission.
This paper studied the effects of different PG application rates on soil respiration, soil enzyme activity and the soil microbial community, revealing the mechanisms by which PG affects soil enzyme activity and microorganisms. The soil respiration rate decreased with increasing PG. The 1% PG treatment had little effect on soil physicochemical properties and changed soil microbial indicators and enzyme activities by less than 20%. After the 10% PG treatment, water-soluble fluoride increased 19.84-fold, the activities of catalase, urease and phosphatase decreased while invertase increased, and the abundance, diversity and evenness of soil bacteria and fungi decreased significantly. The dominant bacterial phyla were Actinobacteria, Proteobacteria, Chloroflexi and Acidobacteria, and the dominant fungal phyla were Ascomycota, Mortierellomycota and Basidiomycota. Redundancy analysis (RDA) showed that soil bacterial composition was mainly driven by electrical conductivity (EC) and Ca2+, while fungal composition was mainly driven by F− and NH4+. These results can help to revisit the current management of PG applications as soil amendments, suggesting that appropriate application of PG can improve soil properties but that the negative effects of PG above a certain threshold should be considered. Therefore, we suggest that the dose of PG used to improve acid soil should be less than 1%.
Supplementary Information.
|
Linking plant functional genes to rhizosphere microbes: a review
|
5eb965c1-48ac-4d58-8a7d-742c9fc0acf2
|
10106864
|
Microbiology[mh]
|
Environment-friendly crop production is one of the challenges agricultural systems face in feeding a growing population (Yashveer et al., ). Increasing evidence indicates that the soil rhizomicrobiome benefits plant growth and health and therefore plays an important role in dealing with this challenge (Bai et al., ; Goh et al., ; Mueller and Sachs, ; Pieterse et al., ; Raza et al., ). The host-associated microbiota inhabits various plant tissues and the root surface, where it accesses soil nutrients (Bai et al., ). Microbial community structure is largely shaped by the host plant and the external environment (Dastogeer et al., ; Friesen et al., ; Raza et al., ). When plants are subjected to biotic or abiotic stresses, they can recruit beneficial microorganisms that help them resist these stresses by secreting a range of chemical factors, which is known as the 'cry for help' strategy (Bai et al., ; Bakker et al., ; Carrión et al., ; Liu et al., ; Liu and Brettell, ). To understand the full 'cry for help' strategy of plants, it is vital to unravel the molecular mechanisms of microbiome recruitment in the rhizosphere (Rolfe et al., ; Zancarini et al., ). The functions of plant genes are crucial for understanding the signalling cascades that control plant development and stress responses (Depuydt and Vandepoele, ). A large number of gene functions have been identified in the model plant Arabidopsis thaliana. Frequently, a single gene or a few genes can largely regulate traits such as nutrient uptake, disease resistance and resistance to abiotic stresses in plants (Liu et al., ; Wei et al., ; Zhao et al., ). These genes have been further validated in different crops such as wheat, rice, maize, soybean and sorghum (Li et al., ; Liu et al., ; Maron et al., ; Wei et al., ; Yokosho et al., ). In recent years, plant functional genes have also been found to play an important role in shaping the rhizomicrobiome (Cordovez et al., ; Zhang et al., ). Discoveries about plant-microbe interactions at the molecular level provide a new direction for genetic breeding (Kroll et al., ; Nerva et al., ). Further research on microbes that mutually interact with host genes is expected to help cultivate new germplasm resources (Kroll et al., ). Plant functional genes can regulate root phenotypic traits and the secretion of root exudates, such as organic acids and hormones (Kaushal et al., ; Wang et al., , ; Yu et al., ), which have been found to drive microbial community assembly in the rhizosphere (Zhalnina et al., ). For example, the expression of plant organic acid channel protein genes can promote the production of organic acids, which can recruit beneficial rhizosphere microorganisms by forming stable metal chelate complexes and increasing soil pH (Zhang et al., ). Rhizosphere microorganisms can also confer health advantages on plants by inducing the expression of plant functional genes (Berendsen et al., ; Liu et al., ). It has been shown that the expression of genes related to nutrient uptake and stress resistance is affected by structural and functional changes in the rhizosphere microbiota. These microbes can regulate the expression of plant functional genes by secreting secondary metabolites, producing volatile compounds and competing for nutrients, thereby directly or indirectly influencing plant growth and development (Hacquard et al., ; Hou et al., ; Kwak et al., ; Netzker et al., ; Yuan et al., ).
Thus, there exists a complex network of interactions between plant functional genes and the rhizomicrobiome, and both play key roles in plant survival and growth (Berendsen et al., ; Depuydt and Vandepoele, ). However, whether plant genes or rhizosphere microbes have a greater influence on plant development is currently debated, which necessitates dedicated algorithms for partitioning the plant and microbial contributions. Exploring this relationship will help us to develop crop varieties with strong adaptability and resistance to various stresses. So far, such studies are only the tip of the iceberg because the mechanisms of microbe-host gene interaction are complex and involve multiple disciplines (Rolfe et al., ). Therefore, understanding how plant-microbe communication is established requires more experimental exploration using cutting-edge technologies. Unprecedented technologies facilitate the investigation of the links between plant-specific genes and rhizosphere microorganisms (Fadiji and Babalola, ; Kumar et al., ; Levy et al., ; Liu et al., ; Schaarschmidt et al., ; Xu et al., ). These technologies include plant-related CRISPR-Cas, transgenics and transgenic hairy roots, and microbe-related 16S rRNA sequencing, GeoChip, metagenomics and synthetic communities. Using 'top-down' and 'bottom-up' experimental designs and the relevant techniques, great progress has been made in understanding the mechanisms of plant functional gene-microbe interaction (Chi et al., ; Ke et al., ; Lawson et al., ; Xu et al., ). Here, we summarize how plant-specific genes regulate the rhizomicrobiome and, in turn, how the rhizomicrobiome influences host genes, as well as the research methods used. Most microorganisms in nature are unculturable (Kaeberlein et al., ), and even for the culturable ones, intricate experimental conditions and media combinations have to be considered, which has greatly obstructed the development of microbiomics (van Teeseling and Jogler, ; Xing et al., ). To overcome the shortcomings of pure culture, microbiome sequencing technologies have been developed in recent years (Liu et al., ). It became clear, however, that the microbiome cannot be fully characterized at the genetic and transcriptional levels alone (Gao and Chu, ); the development of metaproteomics and metabolomics is therefore particularly necessary for microbial functional studies (White et al., ). The unprecedented development of the microbiome technologies mentioned above has enabled in-depth analysis and understanding of the composition and function of host-associated microbiota (Shao et al., ).
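To make the amplicon-based community profiling mentioned above concrete, the sketch below computes two routine summaries from a toy OTU/ASV count table: Shannon diversity per sample and Bray-Curtis dissimilarity between samples. It is illustrative only; the counts are invented and not taken from any study cited here.

```python
# Two routine summaries of a 16S rRNA OTU/ASV count table (toy data).
import numpy as np

def shannon(counts):
    """Shannon diversity of one sample (vector of taxon counts)."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two samples."""
    return float(np.abs(a - b).sum() / (a + b).sum())

# Rows: samples, columns: taxa (invented counts).
otu = np.array([[120, 30, 0, 5],
                [ 40, 60, 10, 2]], dtype=float)

print("Shannon per sample:", [round(shannon(s), 3) for s in otu])
print("Bray-Curtis(sample1, sample2):", round(bray_curtis(otu[0], otu[1]), 3))
```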
Plant-specific genes influence the rhizomicrobiome through molecular signalling, which is essentially a 'top-down' process. An increasing number of studies examine how plant functional genes affect microbes at the molecular level, for example the structure of the rhizomicrobiome selected by cultivars sharing high nutrient use efficiency and stress resistance (Chen et al., ; Qiao et al., ; Shi et al., ). Although some genes explain only a small part of the total variance in the rhizomicrobiome, the microbiome is increasingly considered an important predictor of plant phenotype (Bai et al., ; Ravanbakhsh et al., ; Zancarini et al., ). In this section, we specifically address how the rhizomicrobiome is likely regulated by plant functional genes, focusing on nutrient uptake genes and disease and abiotic stress resistance genes. Nutrient uptake-related genes affect rhizosphere microbes Plant growth and development heavily depend on the availability of nutrients that the root system can access, meaning that plants face enormous challenges in extracting nutrients for cellular activities, and any lack of nutrients may decrease productivity (Morgan and Connolly, ; Sukumar et al., ). As a result, some plant species have 'risen to the occasion' and attempt to recruit soil microorganisms by regulating nutrient uptake genes to enhance their defence against nutrient deprivation (Millet et al., ; Teixeira et al., ; Zhang et al., ). A main mechanism by which nutrient-uptake-related genes regulate rhizosphere microbes is their capability to increase root surface area, root hairs and lateral roots, which are key factors that alter the rhizosphere microbial community (Figure ; Contesto et al., ; Ditengou et al., ; Felten et al., ; Hirsch et al., ; Yu et al., ). For example, the maize (Zea mays) mutant rootless meristem 1 (rum1) is defective in the initiation of embryonic seminal roots and postembryonic lateral roots in primary roots. The Rum1 gene is an important checkpoint for auxin-mediated initiation of lateral and seminal roots in maize, and may participate in the molecular network of root formation by regulating auxin transport in primary roots and auxin perception in the primary root pericycle, thereby influencing lateral root formation (Woll et al., ). Recently, it has been shown that rhizosphere bacterial diversity along the root development region of the maize mutant rum1 is significantly reduced compared with the wild type (Yu et al., ), because the mutant rum1 lacks lateral roots, limiting water and nutrient acquisition during early developmental stages. This suggests that root development-related genes can control the length and number of lateral roots by mediating a number of key signalling substances, which in turn affect the microbial composition in the rhizosphere. A number of genes related to nutrient uptake and transport have been identified, such as ammonium, nitrate and phosphate root transporters (Wei et al., ; Zhu et al., ). For instance, several plasma membrane transporters involved in NO3− uptake have been identified in Arabidopsis and other crops (Hu et al., ; Wang et al., ). NRT1.1B was found to largely explain the differences in nitrogen utilization efficiency between indica and japonica, the two main Asian rice subspecies (Hu et al., ).
Moreover, NRT1.1 not only transports nitrate but also facilitates uptake of the growth hormone auxin from the rhizosphere, which affects lateral root development (Krouk et al., ; Léran et al., ; Teng et al., ). It has been found that wild-type rice harbours more rhizosphere microbes involved in the nitrogen cycle than the NRT1.1B mutant. NRT1.1B is associated with the relative abundance of root bacteria that harbour key genes of the ammonification process, and these microbes may catalyse the formation of ammonium in the rhizosphere environment and thus affect nitrogen acquisition by plants (Zhang et al., ). Another example is the adenosine triphosphate binding cassette (ABC) transporter proteins, a large family shown to be involved in membrane transport of endogenous secondary metabolites in plants (Badri et al., ; Yazaki, ). Some members of this family are important in the secretion of antifungal diterpenes and in heavy metal detoxification (Brunetti et al., ; Jasiński et al., ). Studies have shown that Arabidopsis mutants with a damaged ABC transporter, abcg30 (Atpdr2), increase and decrease the secretion of phenols and sugars, respectively, forming a specific microbial community capable of resisting or degrading the phenolic compounds enriched in abcg30 plant secretions and thus reducing rhizobacterial diversity (Badri et al., ; Cordovez et al., ). Furthermore, karrikin (KAR)/KAR-like signal (KL)/D14L-ligand-responsive genes were shown to promote the production of strigolactones and flavonoids, which selectively modify the composition of the rhizomicrobiome (Wang et al., ). These studies suggest that some nutrient uptake and transport-related genes can influence rhizomicrobiome composition by regulating root cell transporter activity and the secretion of root exudates (e.g., secondary metabolites, organic acids and hormones), thereby regulating plant nutrient utilization and altering root environmental conditions (e.g., soil pH, O2 partial pressure and carbon sources; Figure ; Kaushal et al., ; Liu et al., ; Wang et al., , ; Wen et al., ; Yu et al., ). Moreover, some root secretions can act as signals that initiate rhizosphere chemical communication and recognition processes, thereby influencing microbially based crop growth-defence trade-off strategies (Chen et al., ; Lareen et al., ; Vives-Peris et al., ). Some specific mechanisms exist in legumes by which nutrient uptake-related genes (e.g., genes that control nodulation and thus increase nitrogen uptake) regulate the rhizomicrobiome, because symbiotic nitrogen fixation is a mutually beneficial process established between legumes and rhizobia through the activation of rhizobia-induced signalling pathways and the expression of functional genes required for nodule primordium formation (Gao et al., ; Yang et al., ). Many important genes, including nodule inception (NIN), ERN1 (required for nodulation), Nod factor receptor 5, Lotus histidine kinase 1 and some micro(mi)RNAs such as MtmiR169a-MtNFYA1, have been reported to affect nodulation by regulating nodulation signalling pathways or by mediating the secretion of flavonoids and nitrate (which affect rhizobial infection of legumes; Combier et al., ; Han et al., ; Laloum et al., ; Lorite et al., ; Tsikou et al., ). A recent study showed that overexpression of miR169c inhibited nodulation by targeting the 3′-UTR of GmNFYA-C, whereas nodulation was promoted when miR169c lost its function (Xu et al., ).
In the rhizosphere of prospective host legumes, rhizobia have close cooperative or competitive relationships with soil microorganisms (Han et al., ; Lorite et al., ). For instance, exogenous rhizobia can increase the relative abundance of potentially beneficial microorganisms, thus altering microbial community structure and composition (White et al., ; Xu et al., ; Zgadzaj et al., ; Zhong et al., ). Moreover, the Bacillus cereus group specifically promotes the growth and colonization of Sinorhizobia and suppresses those of Bradyrhizobia (Han et al., ). Therefore, legume genes can also influence the establishment and modification of the rhizomicrobiome community by mediating rhizobial colonization and nodulation. Overall, these studies suggest that nutrient-related genes can directly or indirectly influence the structure of rhizosphere microbial communities by altering root structure and morphology, influencing plant nutrient use efficiency and regulating nodule colonization, thus altering the rhizosphere microenvironment. These mechanisms can inform molecular breeding strategies to improve nutrient utilization and thus crop productivity. Disease resistance genes affect rhizosphere microbes Plant growth in variable environments is threatened by various biotic stresses such as pathogens, and plants have gradually developed corresponding resistance mechanisms (Bakker et al., ; Chen et al., ; Li et al., ; Liu et al., , ; Song et al., ). When plants are invaded by pathogens, disease resistance genes are activated, which in turn trigger plant-specific molecular immune recognition systems (Teixeira et al., ). Previous studies have shown that the cell membrane receptor kinase FERONIA (FER) can regulate microbe-associated molecular pattern (MAMP)-induced reactive oxygen species (ROS) bursts and basal ROS levels in roots through the small G protein ROP2, a positive regulator of plasma membrane NADPH oxidase (Duan et al., ; Stegmann et al., ). Genetic analysis of different gene mutants showed that ROP2-mediated regulation of basal ROS levels was essential for controlling the growth of Pseudomonas (Bergonci et al., ; Haruta et al., ; Li et al., ; Wang et al., ; Zhu et al., ). The fer-8 mutant, with reduced basal ROS levels in the root system after pathogen invasion, and mutants lacking NADPH oxidase showed elevated rhizosphere Pseudomonas (Song et al., ), suggesting that plants may modulate the immune system through the RALF-FER signalling pathway or affect the release of specific secretions that increase Pseudomonas populations to resist pathogen invasion (Figure ; Berendsen et al., ; Liu et al., ; Rudrappa et al., ). As the most representative plant secondary metabolites, coumarins, benzoxazinoids and triterpenes play a pivotal role in improving plant disease resistance (Koprivova and Kopriva, ). Recently, two Multidrug and Toxic Compound Extrusion (MATE) transporter proteins (CmMATE1 and ClMATE1) were found to be involved in the transport of their respective cucurbitacins (triterpenoids unique to the Cucurbitaceae). It was further shown that the transport of cucurbitacin B from melon roots into the soil regulates the rhizosphere microbiome by selectively enriching two bacterial genera, Enterobacter and Bacillus, leading to strong resistance to the soil-borne wilt fungus Fusarium oxysporum (Zhong et al., ; Zhou et al., ).
Together, these studies demonstrate that plant disease resistance genes can recruit beneficial microorganisms or alter microbial community structure by activating the plant immune system or regulating the synthesis of several key plant metabolites. These research efforts pave the way for using the rhizosphere microbiome to improve resistance to soil-borne diseases. Abiotic stress resistance genes affect rhizosphere microbes In addition to biotic stresses, plant-microbial symbioses are also subjected to many abiotic stresses (Zhang et al., ; Zhu, ), such as nutrient deficiency (e.g., nitrogen, iron and phosphorus) and high heavy metal (e.g., aluminium, cadmium and lead) stresses (Castrillo et al., ; Fang et al., ; Finkel et al., ; Harbort et al., ; López-Arredondo et al., ; Ma, ; von Uexküll and Mutert, ). The Arabidopsis root-specific R2R3-type MYB transcription factor MYB72 has become recognized as an important component of induced systemic resistance (ISR; Van der Ent et al., ). In addition to its role in ISR, MYB72 is also induced in Arabidopsis roots under conditions of iron limitation and distorted iron uptake (Buckhout et al., ; Colangelo and Guerinot, ; van de Mortel et al., ). It has been convincingly demonstrated that the transcription factor MYB72 and the MYB72-controlled β-glucosidase BGLU42 play key roles in regulating beneficial rhizobacteria-induced ISR and iron-uptake responses, by controlling coumarin exudation that inhibits soil-borne fungal pathogens and promotes the growth of growth-promoting, ISR-inducing rhizobacteria (Stringlis et al., ). The rhizomicrobiome has therefore become a 'secret weapon' for plants to seize scarce soil iron resources, providing new ideas for regulating soil iron mobilization and activation, and promoting the broader application of the plant functional gene-rhizosphere microorganism model in crop resistance to abiotic stresses (Stringlis et al., ). The plant MATE family transports a wide range of substrates such as organic acids, phytohormones and secondary metabolites (Magalhaes et al., ; Seo et al., ). The functions of many MATE transporter proteins have been illustrated in plants (Takanashi et al., ), including the transport of secondary metabolites such as alkaloids (Shoji et al., ), disease resistance regulation (Nawrath et al., ; Sun et al., ), iron translocation (Durrett et al., ; Yokosho et al., ) and Al detoxification (Wu et al., ). MATE transporter proteins are also involved in aluminium (Al) resistance and tolerance in crops such as rice, maize, soybean and sorghum (Liu et al., ; Maron et al., ; Yokosho et al., ). When soybean was exposed to high Al stress, the expression of the GmMATE58 and GmMATE1 genes increased, promoting the secretion of substances such as malic acid, oxalic acid and phenolic compounds (Chen and Liao, ; Li et al., ; Liu et al., ; Zhou et al., ), which can recruit beneficial microorganisms that help soybean resist Al toxicity. In particular, the recruited microorganisms, such as Burkholderia, can improve the solubility of phosphorus in the soil and carry out denitrification, thus improving soybean tolerance to Al toxicity (Lian et al., ). Notably, not only the normal expression or overexpression of genes but also the loss of plant functional genes can affect host resistance to various stresses.
Our recent research found that rice could influence rhizosphere microorganisms by changing plant metabolites, such as salicin, arbutin and glycolic acid phosphate, after loss of function of the sst (seedling salt tolerant) gene, and that these microorganisms then assist the host in resisting salt stress (Lian et al., ). These studies illustrate that, like nutrient-related genes, abiotic stress resistance genes can regulate the rhizomicrobiome by inducing systemic stress resistance and regulating specific metabolites. The expression of host-specific genes also has a regulatory effect on soil enzyme activities (Figure ; Chen et al., ; Fließbach et al., ). This is mainly because soil enzymes are derived largely from root exudates (Guan et al., ) and microbial metabolites (Zimmermann and Frey, ). The amount and functions of microorganisms are closely related to the activities of soil enzymes (Durán et al., ; Velmourougane and Blaise, ), including urease, sucrase and cellulase. In greenhouse trials, transgenic AFPCHI disease-resistant sugar beet was found to increase urease, dehydrogenase, protease, catalase, pronase, and acid and alkaline phosphatase activities (Bezirganoglu and Uysal, ). However, it has been reported that tobacco carrying an antimicrobial protein transgene or a null-plasmid transgene had some inhibitory effects on peroxidase and urease activities in purple soil at specific periods (Wang et al., ). Whether plant disease resistance genes affect the assembly of rhizosphere microorganisms by regulating soil enzyme activity remains to be further verified. Mechanisms of genes regulating the rhizosphere microbes According to the description above, plant functional genes affect the rhizomicrobiome mainly by regulating root structure and morphology, plant nutrient use efficiency, rhizobial colonization, secondary metabolites and hormones, and by activating the plant immune system, which in turn alter the rhizosphere microenvironment or signal directly to microorganisms (Egamberdieva et al., ; Eichmann et al., ; López-Ráez et al., ). However, these mechanisms often act interactively in plants. For example, phytohormones can influence root structural morphology, plant defence processes and root exudate secretion (Eichmann et al., ; Fu et al., ). Auxin regulates Arabidopsis root development mainly through the auxin synthesis pathway and the polar transport carrier pathway. Deletion of the auxin-related genes rty (rooty) and sur (super root) can lead to excessive endogenous IAA synthesis, resulting in a high number of lateral roots (Boerjan et al., ). Ethylene promotes Arabidopsis root hair growth by regulating the activity of the EIN3/EIL1 and RHD6/RSL1 transcriptional complexes (Feng et al., ). Moreover, the plant immune system can be divided into two layers, and hormonal signals are essential for both (Aerts et al., ). In the first layer, damaged plants recognize microorganisms/pathogens and release small-molecule damage-associated molecular patterns that trigger immune signals leading to pattern-triggered immunity (PTI; Dangl et al., ; Erb and Reymond, ). Gene expression in PTI is almost always influenced by interactions between signalling sectors (Hillmer et al., ). In the second layer, pathogens secrete variable effectors that hinder PTI by inhibiting defence hormones, and resistant plants recognize these effectors, triggering effector-triggered immunity (ETI; Han and Kahmann, ).
In ETI, each signalling sector can (partially) take over the response if another is inactive (Tsuda et al., ). However, in the actual defence process there is complex crosstalk between the molecular pathways of different hormones, and this crosstalk is critical for adjusting plant growth and development and thus for affecting microorganisms (Aerts et al., ). Furthermore, hormones can mediate the secretion of root exudates. It has been shown that disruption of ethylene (ET) signalling pathways leads to differences in root exudate composition, including smaller amounts of esculetin, gallic acid, L-fucose and eicosapentaenoic acid and higher amounts of β-aldehyde, and that these root exudate metabolites can affect the assembly and function of bacterial taxa (Fu et al., ).
Rhizosphere microorganisms can decompose soil organic matter into inorganic nutrients for plants, and their physiological and metabolic activities can also improve soil quality (Bhatti et al., ; Li et al., ; Li and Gong, ; Mishra et al., ; Wang et al., ). Rhizosphere microorganisms increase nutrient availability to plants, promote plant development and thereby deliver microbial ecological services in farmland ecosystems. However, obtaining genetic varieties with high nutrient utilization and broad stress resistance is the fundamental way to improve the yield and quality of farmland crops (Ali et al., ; Anwar and Kim, ). Therefore, uncovering the mechanisms of 'bottom-up' regulation of plant functional genes by rhizosphere microbes is of great importance for agricultural production. In this section, microbial effects on the expression of functional genes related to plant growth promotion, flowering, immune regulation and stress tolerance are reviewed. Effects of rhizosphere microbes on plant growth and development genes For decades, research on beneficial plant-microbe interactions has built a strong molecular framework (Lugtenberg and Kamilova, ; Saleem et al., ). Microbes can up- or down-regulate genes related to plant nutrient absorption, for example by fixing nitrogen, and genes related to secondary metabolites, thus promoting or inhibiting plant growth (Figure ). The main microorganisms associated with nitrogen fixation in the rhizosphere of specific plants include Klebsiella, Paenibacillus and Azospirillum (Grady et al., ; Mehnaz et al., ; Ryu et al., ). Wang et al. ( ) pointed out that specific relationships exist between given host genetics and associated microbes. This is further supported by results from the application of synthetic communities (SynComs), which can systemically regulate the transcription of genes involved in multiple facets of growth and nutrient metabolism, especially auxin responses and nutrient signalling pathways. The expression of nitrate transporter genes and nitrate reductase genes was down-regulated in plants treated with SynComs, which improved the biological fixation of N2, thereby reducing the direct absorption of nitrogen by roots and the related metabolic pathways (Wang et al., ). Many phosphate starvation response (PSR) genes, as well as phosphate transport and metabolism genes, are also activated by SynComs (Wang et al., ; Wu et al., ). This indicates that microorganisms not only contribute to phosphorus release from insoluble forms but also activate the PSR signalling system, thereby enhancing the absorption of environmental phosphorus and promoting inter-tissue phosphorus recycling (Desbrosses and Stougaard, ; Li et al., ; Wu et al., ; Zhong et al., ). Additionally, GO analysis showed that auxin-responsive genes were abundant among the differentially expressed genes affected by SynCom application (Hinsinger et al., ; Wang et al., ). Recruited rhizosphere microbes can produce phytohormones that regulate plant flowering signalling pathways (Rodriguez et al., ). A current focus is to decipher the link between rhizosphere microorganisms and the plant functional genes controlling flowering time (Figure ). For one thing, microorganisms can produce IAA from tryptophan (Trp; Duca et al., ; Molina et al., ; Patten et al., ; Treesubsuntorn et al., ). One of the abundant rhizosphere microorganisms, Arthrobacter, has been reported to produce IAA, which is beneficial for plant growth (Li et al., ).
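The GO analysis mentioned above, in which auxin-responsive genes were over-represented among SynCom-affected differentially expressed genes, typically rests on a hypergeometric over-representation test. The sketch below shows that test; the gene identifiers and set sizes are invented purely for illustration and are not taken from the studies cited here.

```python
# Hedged sketch of a GO over-representation (hypergeometric) test.
# All gene identifiers and set sizes below are invented for illustration.
from scipy.stats import hypergeom

def go_enrichment(deg, term_genes, background):
    """One-sided hypergeometric test: is a GO term over-represented
    among differentially expressed genes (deg) relative to the background?"""
    deg, term_genes, background = set(deg), set(term_genes), set(background)
    k = len(deg & term_genes)          # DEGs annotated to the term
    M = len(background)                # all genes tested
    n = len(term_genes & background)   # term genes in the background
    N = len(deg)                       # number of DEGs
    p = hypergeom.sf(k - 1, M, n, N)   # P(X >= k)
    return k, p

background = [f"gene{i}" for i in range(10000)]
auxin_term = background[:200]                      # 200 genes annotated "response to auxin"
degs = background[:60] + background[5000:5140]     # 200 DEGs, 60 of them auxin-related
k, p = go_enrichment(degs, auxin_term, background)
print(f"{k} auxin-responsive DEGs, hypergeometric p = {p:.2e}")
```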
IAA was the direct driver that down-regulated the expression of genes involved in flowering, thereby delaying flowering time (Lu et al., ; Mai et al., ). It was also found that selectively enriched rhizosphere microorganisms regulated flowering time by affecting the available nitrogen content in autoclaved potting-mix soils (Ishioka et al., ; Panke-Buisse et al., ). Notably, microbes also synthesize and emit many volatile organic compounds (VOCs) that perturb host flowering time (Hung et al., ; Sánchez-López et al., ). VOCs are low molecular weight (<300 Dalton) molecules that are easily dispersed in air and water owing to their high vapour pressure and low boiling point (Bitas et al., ; Schmidt et al., ; Schulz and Dickschat, ). Floral buds appeared in VOC-treated ahk2/4 and ahk3/4 plants 3–4 days before they did in non-treated Arabidopsis, whereas VOCs did not exert any significant effect on the time of floral bud appearance in ahk2/3 and 35S:CKX1 plants (ahk2/4, ahk3/4 and ahk2/3 are CK signalling mutants; 35S:CKX1 plants over-express CK oxidase/dehydrogenase1). These findings provide strong evidence that VOC-promoted early flowering involves suppression of NO action through the scavenging of NO molecules by CKs (Riefler et al., ; Sánchez-López et al., ; Werner et al., ). Together, microbes synthesize a multitude of nutrients and hormones that affect the expression of plant flowering-related genes and may directly or indirectly regulate plant flowering time (De-la-Peña and Loyola-Vargas, ). VOCs released from the rhizomicrobiome can also improve multiple functions in ecosystems, such as plant growth and development (Gutiérrez-Luna et al., ; Kanchiswamy et al., ; Ortíz-Castro et al., ; Schulz-Bohm et al., ). A variety of bacteria and fungi have been identified that produce VOCs, such as Bacillus, Pseudomonas and Serratia spp. (Hassani et al., ; Plyuta et al., ; Xie et al., ). Sun et al. ( ) found that treatment with F. luteovirens VOCs reduces primary root growth by aggravating auxin accumulation through repression of the abundance of the auxin efflux carrier PIN-FORMED 2 (PIN2) protein, whereas it increases the lateral root number of A. thaliana seedlings. In addition to modulating root system architecture, treatment with F. luteovirens VOCs markedly increased aboveground growth. Transcriptomic and metabolomic analyses further supported the idea that F. luteovirens VOCs regulate plant growth and development by inducing up- or down-regulation of genes related to carbon/nitrogen metabolism and antioxidant defence (Sun et al., ). Overall, these findings suggest that rhizosphere microbes can regulate plant growth-related gene expression by mobilizing nutrients, altering plant nutrient use efficiency and producing hormones such as IAA and volatile compounds. Effects of rhizosphere microorganisms on plant resistance genes As sessile organisms, plants have had to cope with various biotic and abiotic stresses over long-term domestication (Zhu, ). In this process, corresponding resistance genes and mechanisms have evolved continuously to resist adverse environmental conditions (Chong et al., ; Frantzeskakis et al., ). Recently, a paradigm shift has emerged in the life sciences, in which microbial communities are viewed as core drivers of tolerance mechanisms (Cordovez et al., ).
Beneficial microbes are recruited to build defence signalling pathways (Figure ) based on the interplay of plants, pathogens and the rhizomicrobiome in response to biotic stress, which is likely a survival strategy conserved across the plant kingdom (Liu et al., ; Liu and Brettell, ). Infection of durum wheat by the fungal pathogen Fusarium pseudograminearum (Fp) leads to an enrichment of the beneficial bacterium Stenotrophomonas rhizophila (SR80) in the rhizosphere. Acting as an early warning factor, the recruited SR80 was able to promote the large-scale expression of pathogenesis-related (PR) genes involved in the salicylic acid and jasmonic acid signalling pathways, enhancing host immunity against crown rot disease (Liu et al., , ). In another case, VOCs produced by Fusarium culmorum stimulated the bacterium Serratia plymuthica to produce the volatile sodorifen, which induced the expression of associated defence genes in Arabidopsis (Raza et al., ; Schmidt et al., ). Under abiotic stresses, the effect of the rhizomicrobiome on plant genes might be easier to understand. Under low phosphorus availability, the phosphate starvation response induced by microbial colonization can promote the expression of the master transcriptional regulator PHR1 to alter orthophosphate (Pi) metabolism in plants. PHR1 directly activates the microbiome-enhanced response to phosphate limitation while repressing the microbially driven plant immune system. This suggests that microbes change the transcription level of defence genes and thereby the immunity of plants (Castrillo et al., ). Furthermore, the rhizomicrobiome may modulate additional defensive pathways to mitigate abiotic stress. Inoculation with arbuscular mycorrhizal fungi (AMF) under drought stress has been found to induce the expression of the Δ1-pyrroline-5-carboxylate synthetase (P5CS) gene. P5CS is a key enzyme in proline synthesis, which promotes cellular water retention and thus improves the ability of plants to resist osmotic stress (Hu et al., ; Ruiz-Lozano et al., ). At the same time, AMF also enhance plant resistance to drought stress by regulating the expression of the 9-cis-epoxycarotenoid dioxygenase (NCED) gene. NCED is an important enzyme controlling abscisic acid (ABA) metabolism, catalysing the oxidative cleavage of epoxycarotenoids into xanthoxin (Chauffour et al., ; Taylor et al., ). As a whole, the evidence above suggests that microbes can regulate the expression of plant genes through multiple pathways, such as secreting hormones, producing VOCs, enhancing the plant immune system and building defence signalling pathways, to help the host adapt to various stresses. These studies shed light on the important role of the plant-associated rhizosphere microbiota for plant functions and offer multiple avenues for manipulating host growth through microbial intervention.
The link between plant functional genes and the rhizomicrobiome is complex. Not only does the 'functional gene-downstream gene-metabolite or other signalling substance-microbe' pathway need to be considered, but also factors such as microbe-microbe interactions, the environment and plant residues (Chen et al., ; Edwards et al., ; Gao, Han, et al., ; Gao, Karlsson, et al., ; Geddes et al., ; Pascale et al., ; Trivedi et al., ; Trubl et al., ). Signalling substances, such as hormones and small RNAs, are frequently exchanged between the host and the microorganism, triggering structural and functional changes on both sides (Huang et al., ; Middleton et al., ; Yang et al., ). When microorganisms deliver the desired functions, the outcome is healthy plant growth. New findings suggest that the ultimate outcome for host health may depend not only on the exchange of substances between host and microbes but also on the signalling and metabolic interactions among microbiome members (Durán et al., ; Finkel et al., ; Harbort et al., ; Xu et al., ). Thus, understanding plant gene-microbe interactions may require examining these relationships at the level of host and microbial functional capacity, activity and molecular exchange (Xu et al., ). However, a larger reason for the slow progress in our understanding of the functionality of plant gene-microbe interactions may lie in our choice of methods and tools. Integrating host-centric molecular technologies, such as CRISPR/Cas systems (e.g., Cas9 and Cas12a), gene silencing (e.g., RNA interference) and gene overexpression, with microbe-centric omics sequencing technologies, such as amplicon sequencing, shotgun metagenomics, metatranscriptomics and metabolomics, is important to unlock the complex mechanisms of plant gene-microbe interactions (Figure ; Fitzpatrick et al., ). Genome-wide association studies (GWAS) connect plant omics with microbiomics and demonstrate that host genomics does influence the composition of the microbiome (Trivedi et al., ). GWAS were previously used to study the phyllosphere microbiome with quantitative methods that map microbiomes as phenotypes (Horton et al., ; Oyserman et al., ; Roman-Reyna et al., ; Wallace et al., ) and are increasingly focusing on the rhizosphere microbiomes of plants such as Arabidopsis, sorghum, barley, maize and tomato (Bergelson et al., ; Escudero-Martinez et al., ; Oyserman et al., , ; Wagner et al., ). Compared with the phyllosphere, the rhizosphere has proven to be the most promising compartment for unravelling the genetic control of the host microbiome (Deng et al., ). This may be owing to the high complexity of the rhizosphere microbiome and its strong colonization ability (Bano et al., ; Rico et al., ; Schlechter et al., ). GWAS dissect the tripartite relationship among plant genotype, phenotype and microbiome, thus identifying the link between plant phenotype and microbiome function (Horton et al., ; Vorholt et al., ). Therefore, the abundance and community structure of the rhizosphere microbiome can be used to infer the relevant genetic loci and plant genes (Deng et al., ; Escudero-Martinez et al., ). The inferred plant genes can be validated with artificially modified mutants to elucidate the potential host genetic causes of microbiome changes (Schäfer et al., ; Wagner et al., ). Even when candidate genes for microbial recruitment are identified, reproducing and validating them remains difficult (Zancarini et al., ).
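One way to make the 'microbiome as phenotype' idea concrete is a single-marker association scan in which the abundance of a rhizosphere taxon is treated as a quantitative trait and regressed on each host variant. The sketch below is purely illustrative and uses simulated, hypothetical data; real plant GWAS of microbial traits rely on mixed models that account for population structure and kinship, which are omitted here.

```python
# Minimal sketch of a "microbiome as phenotype" association scan on simulated data.
# Real GWAS of microbial traits use mixed models that correct for population
# structure and kinship; this sketch only illustrates the basic idea.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_plants, n_snps = 200, 1000

# Hypothetical host genotypes coded as 0/1/2 minor-allele counts
genotypes = rng.integers(0, 3, size=(n_plants, n_snps))

# Hypothetical abundance of one rhizosphere taxon per plant (log-transformed)
taxon_abundance = np.log1p(rng.lognormal(mean=0.0, sigma=1.0, size=n_plants))

# Single-marker regression of taxon abundance on each SNP
pvals = np.array([
    stats.linregress(genotypes[:, j], taxon_abundance).pvalue
    for j in range(n_snps)
])

# Benjamini-Hochberg correction to flag candidate loci
order = np.argsort(pvals)
adjusted = pvals[order] * n_snps / (np.arange(n_snps) + 1)
q = np.minimum.accumulate(adjusted[::-1])[::-1]
candidates = order[q < 0.05]
print(f"{candidates.size} candidate loci at FDR < 0.05")
```

In practice, such scans cover tens of thousands of markers and many taxa, and candidate loci are subsequently validated in mutant lines, as discussed above.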
Conversely, host genotype data can also be used to predict the composition of the microbiome, which determines whether there is inter- and intra-species variation in the microbiome of the host being tested (Deng et al., ; Fitzpatrick et al., ; Walters et al., ). GWAS of plant-microbiome associations promote a comprehensive understanding of the host molecular mechanisms of microbiome assembly and lay the foundation for microbiome characterization to be implemented in breeding programs. Notably, heritable rhizosphere microbes showed strong overlap across different host genotypes (Deng et al., ). This fraction may represent the few pivotal microorganisms that have been stabilized over domestication and evolutionary timescales and subsequently regulate the proliferation of other members of the community (Brachi et al., ). Furthermore, since environmental conditions are a major component of variability and plant genotypes can explain only a small share of microbial variation, GWAS need to develop more comprehensive models to reveal the effects of genotype, microbiome, environment and their interactions on plant phenotypes (Zancarini et al., ). For microbial sequencing, the most widely used technology is still second-generation sequencing, which has undoubtedly enabled researchers to gain a broad understanding of microbial community structure (Metzker, ; Niedringhaus et al., ). Increasingly, studies are also focusing on the activity and function of microbes by incorporating metatranscriptomics and metagenomics, gene chips and viromics, with more advanced approaches likely to be developed later (McDonald et al., ; Wang et al., ; Xu et al., ; Zaramela et al., ). Each of the technologies mentioned above has its own advantages and disadvantages (Table ), and researchers can choose the appropriate method according to their needs as well as their budget. It is worth noting that the influence of genes on the rhizomicrobiome is often exerted through metabolites or small signalling molecules. The detection of metabolites and of the transfer of small signalling molecules (e.g., miRNAs) therefore becomes particularly important in this reciprocal process (Middleton et al., ; Pang et al., ). Accordingly, data integration approaches are essential to unravel the relationship between plant functional genes and microbes. Pang et al. ( ) reviewed the statistical approaches developed for integrating plant metabolite and microbiome data, and Zancarini et al. ( ) reviewed statistical approaches for integrating plant omics with microbiome data. Rapidly developing multi-omics combinatorial analyses may further elucidate the mechanisms of interaction between genes and microbiomes. A large amount of data generated with the above methods supports the linkage between plant genes and microbial structure and function, but validation of microbial functions is still lacking. Recently, artificial recombination of communities has been increasingly used to validate microbial functions (Durán et al., ; Zhuang et al., ). We believe that the rapid development of technologies such as the high-throughput partitioning methods developed by Zhang et al. ( ) and computer-guided synthesis of artificial communities could help explore plant gene-microbe interactions.
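Before such validation experiments, candidate gene-taxon links are typically nominated by integrating omics layers. As a minimal, hypothetical illustration of one such integration step, the sketch below correlates host transcript levels with microbial taxon abundances after a centred log-ratio (CLR) transform, one common way of handling the compositional nature of amplicon count data. All variable names and data are invented, and the dedicated integration frameworks reviewed by Pang et al. and Zancarini et al. go well beyond pairwise correlations.

```python
# Minimal sketch of integrating host transcript levels with microbiome profiles
# on hypothetical data. Amplicon counts are compositional, so a centred log-ratio
# (CLR) transform is applied before correlating taxa with plant genes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_samples, n_taxa, n_genes = 60, 150, 40

counts = rng.poisson(lam=20, size=(n_samples, n_taxa)) + 1   # pseudo-count avoids log(0)
clr = np.log(counts) - np.log(counts).mean(axis=1, keepdims=True)
expression = rng.normal(size=(n_samples, n_genes))           # e.g. log-scaled transcript levels

# Spearman correlation for every gene-taxon pair
rhos = np.empty((n_genes, n_taxa))
pvals = np.empty((n_genes, n_taxa))
for g in range(n_genes):
    for t in range(n_taxa):
        rhos[g, t], pvals[g, t] = stats.spearmanr(expression[:, g], clr[:, t])

# Benjamini-Hochberg correction across all gene-taxon pairs
flat = pvals.ravel()
m = flat.size
order = np.argsort(flat)
q = np.empty(m)
q[order] = np.minimum.accumulate((flat[order] * m / (np.arange(m) + 1))[::-1])[::-1]
links = np.argwhere(q.reshape(n_genes, n_taxa) < 0.05)
print(f"{len(links)} candidate gene-taxon associations at FDR < 0.05")
```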
As a specific example of validation with synthetic communities, root bacterial community composition differed significantly between Arabidopsis root diffusion barrier mutants and the wild type after inoculation with artificial bacterial communities (Salas-González et al., ). These results suggest that the endodermal diffusion barriers of the Arabidopsis root regulate the configuration of the microbial community (Zhou et al., ). Furthermore, comparison of differentially expressed genes in wild-type and myb36-2 mutant roots showed that the synthetic community repressed the transcriptional response to ABA (cluster C2), leading to the speculation that the microbiome regulates suberization and lignification through ABA-dependent pathways (Barberon et al., ; Salas-González et al., ). It is clear that synthetic biology is establishing links across organismal boundaries, as well as molecular links between plant nutrition and defence (Berendsen et al., ; Castrillo et al., ; Liu et al., ; Zhou et al., ). However, recent developments in synthetic communities have neglected fundamental issues in microbiome studies, namely standardization and reproducibility (Zengler et al., ). There is little standardization of the culture systems used to study complex microbial communities. Researchers typically use single strains or simple co-cultures, which are often poor models for the complex and metabolically diverse microbial communities found in nature (Ruby, ). While these methods can provide a better understanding of the native microbiome, they lack reproducibility (even within a single laboratory) in the absence of microbial inocula that are stable over time. Care needs to be taken when using metagenomes to detect endophytes in leaves, stems and roots, as cross-contamination between host genes and microbial genes can occur (Liu et al., ). To solve this problem, the host genome needs to be identified so that host gene contamination can be removed in subsequent analyses (Marotz et al., ; Song and Xie, ). This has certainly limited such research in some crops. In addition, choosing the appropriate time to sample is a challenge, as there is often little or no a priori knowledge of when host and microbiome responses occur and how their interactions change as the plant grows (Xiong et al., ). As some of the techniques involved are expensive, we recommend that researchers select multiple sampling time points and use suitable techniques to explore the dynamic processes of plant gene and microbial interactions in focal periods using a multi-omics approach. In addition, as soil type strongly influences microorganisms (Bai et al., ; Girvan et al., ; Pershina et al., ), we suggest that experiments be verified in different soil types, so as to find general patterns of gene regulation of microorganisms. Summary and outlook Microbial communities with manipulated plant functional gene expression have great potential for bioengineering, agriculture and environmental remediation. There is clear evidence that plant functional genes and the rhizomicrobiome can interact with each other. However, more fundamental studies are needed to decipher the 'on or off' switching of functional genes in plant-microbe communication. There is no shortage of emerging technologies and methodological approaches that can be used to further explore the molecular mechanisms and signalling pathways of microbe-host gene interactions.
However, there is still a long way to go before a complete network of plant functional genes and rhizosphere microbes can be constructed, and a plethora of questions remain outstanding: (i) How can plant genes that regulate the rhizomicrobiome be identified on a large scale? GWAS may be a very good approach, considering that they can be used to identify microorganisms regulated by specific genes; (ii) To what extent and for how long can plant genes play a role in shaping the rhizospheric microbiota and its associated functions, and, conversely, microorganisms in shaping specific genes? (iii) Besides metabolites, are there other signals, such as miRNAs, involved in the plant-microbe interaction? This knowledge will enable us to reshape the microbiome through genetic engineering, or to regulate the functional genes of plants through microbes, ultimately optimizing plant growth.
The authors declare that they have no competing interests.
Tengxiang Lian, Jian Jin and Hai Nian designed the content and structure of the review. Qi Liu, Tengxiang Lian, Lang Cheng wrote the main manuscript. Qi Liu, Lang Cheng prepared Figures , , and Table . The authors read and approved the final version of the manuscript.
|
The state of infectious disease training in Germany before introduction of the new board certification in internal medicine and infectious diseases: past experience and future expectations
|
21c26548-2348-4f32-9c1d-f24c3c39fe58
|
10106872
|
Internal Medicine[mh]
|
To date, certified infectious diseases (ID) training in Germany, according to the federal standard medical training regulations, has been available only as additional training after specialization in, for example, internal medicine, certified by the federally organized regional medical associations. The basic requirements for certified additional ID training are a specialist qualification in another medical field and at least 12 months of ID training supervised by an appointed instructor. Of those, 6 months must be completed in inpatient or outpatient ID service, and an additional 6 months may be served in a related field such as Infection Control and Prevention or microbiology. An independent sub-specialization in ID as a separate internal medicine focus, as is common in Germany for cardiology or gastroenterology, for example, had not yet been established. For this reason, the German Society for Infectious Diseases (Deutsche Gesellschaft für Infektiologie, DGI) introduced its own professional designation, the "Infectious Diseases Specialist (DGI)", in 2002, in order to enable further training in ID in Germany that is comparable with international standards. To obtain this certificate, the candidate must be a member of the DGI, have been a specialist in another field (e.g. internal medicine) for at least three years and have worked in ID or a related field for at least three years. If these three years of employment have not taken place at an ID department certified by the DGI (a so-called DGI center), it is necessary to achieve 250 ID-specific continuing medical education points (iCME) of the German ID Academy within 5 years. A detailed description of the further training options can be found in Table S1 in the supplements. Apart from the additional training in ID certified by the regional medical associations, this created a further possibility for a nationwide uniform certification of ID training. This training opportunity was frequently taken up at DGI centers, which are able to ensure a certain standard of ID provision and care. Naturally, this has also led to a concentration of additional ID training opportunities at DGI centers. Of note, it is also possible to achieve the DGI ID specialist title through extracurricular activities, even if one is not working at a DGI center. The only federal state that has already had a board certification in ID for several years is Mecklenburg-Vorpommern. Regarding such differences at federal and state levels, it has to be said that the state level only makes a recommendation, and the practical implementation takes place at the federal level. At the latest with the outbreak of the coronavirus disease 2019 (COVID-19) pandemic, it became obvious that ID specialists play an important role in patient care in modern medicine in Germany. In 2015, there were about 500–600 ID specialists in Germany, with about 300 of them providing direct patient care. Approximately 40 physicians complete ID training in Germany each year. Even before the COVID-19 pandemic, an intensification of ID training had been demanded for years in order to meet the existing need. At least in part, the call for an independent sub-specialization in ID was intended to raise the attractiveness of further training in ID and to attract young medical professionals to the field. Finally, at the 124th German Medical Congress in May 2021, an independent sub-specialization in internal medicine and ID was approved.
With the recent revision of the federal standard medical training regulations, the introduction of this new ID sub-specialization training program has been initiated, but it has not yet been implemented or approved by all federally organized regional medical associations. In this time of transition, the Young Professionals section ("Young DGI") of the DGI conducted a survey among its members, mostly ID residents and specialists, and other interested persons. The survey's goals were to assess past and current experiences as well as expectations and desires for the future of training in ID.
In discussions among peers, four areas relevant to ID training were identified: current training and satisfaction, compatibility of family and career, opportunities for science and research, and aspirations and expectations for the new specialization and the future curriculum. These areas were covered with 59 questions (46 multiple-choice questions, six multiple-select questions, five open questions and two grading questions [scale 1–6]). Depending on training status and responses, different questions were presented via conditional branching. The questionnaire was created as a voluntary, anonymous survey using Microsoft Forms (Microsoft Corp., released 2016, Redmond, WA, USA). Consent to data publication had to be confirmed beforehand by the participants. The questionnaire was evaluated during a pre-test phase: the survey was passed on to three randomly selected colleagues from each educational group (student, resident or ID specialist), who were asked to complete the survey in advance and to look out for and report any errors or inconsistencies in content and form. The survey was then reviewed and finalized. The primary call for participation in the survey was distributed among members of the Young DGI via digital networks (email and social media channels) and promoted as a survey of ID training in Germany. In addition, every recipient of the call for participation was requested to forward the survey as often as possible to all potentially interested persons at their own discretion, following a snowball principle. No further selection of participants was made, and no one was excluded from completing the survey. No restrictions regarding age, educational status or other aspects were applied. To contextualize the results, university locations with professorships in infectiology, DGI centers and ID training instructors were identified from relevant websites and, together with the current physician statistics of the German Medical Association, considered in relation to the study data. Statistical analysis Statistical analysis was performed using IBM SPSS Statistics for Windows (IBM Corp., released 2020, Version 27.0, Armonk, NY, USA). Continuous variables were summarized as mean ± standard deviation or median, and categorical variables were presented as number and percentage. Comparisons of study cohort characteristics were performed using two-sided t-tests for normally distributed continuous variables, the nonparametric Mann–Whitney U test for non-normally distributed values, and χ2 tests (Pearson and Fisher's exact test) for categorical data. A one-way analysis of variance (ANOVA) was performed for comparisons of continuous variables across at least two independent samples for normally distributed parameters, and the non-parametric Kruskal–Wallis test for parameters not normally distributed. Differences were considered significant at p < 0.05 with a confidence interval (CI) of 95%.
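As an illustrative sketch only (the original analysis was run in SPSS), the same test choices can be reproduced with open-source tools; the toy data, group labels and counts below are hypothetical and merely mirror the logic described above.

```python
# Illustrative re-implementation of the reported test choices on hypothetical data.
# The original analysis was performed in SPSS; this sketch only mirrors the logic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Continuous variable (e.g., age) in two groups (e.g., men vs. women)
age_men = rng.normal(45, 10, 120)
age_women = rng.normal(42, 10, 100)

# Two-sided t-test if both groups look normally distributed (Shapiro-Wilk),
# otherwise the nonparametric Mann-Whitney U test
_, p_norm_m = stats.shapiro(age_men)
_, p_norm_w = stats.shapiro(age_women)
if p_norm_m > 0.05 and p_norm_w > 0.05:
    _, p_group = stats.ttest_ind(age_men, age_women)
else:
    _, p_group = stats.mannwhitneyu(age_men, age_women, alternative="two-sided")
print(f"men vs. women: p = {p_group:.3f}")

# Categorical data: chi-square test, or Fisher's exact test for sparse 2x2 tables
# (rows = gender, columns = parental leave yes/no; counts are invented)
table = np.array([[21, 79], [40, 60]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

# More than two independent groups: one-way ANOVA or Kruskal-Wallis test
students = rng.normal(25, 2, 30)
residents = rng.normal(32, 4, 50)
specialists = rng.normal(48, 9, 90)
_, p_anova = stats.f_oneway(students, residents, specialists)
_, p_kruskal = stats.kruskal(students, residents, specialists)

# Differences considered significant at p < 0.05 (two-sided)
```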
Between December 2021 and February 2022, 307 participants voluntarily completed the survey. Seven participants did not confirm their consent to publish the data and were therefore excluded from the analysis (Fig. ). Study population In the total study cohort, the median age was 42 years (IQR 22–72) and more males than females participated (59.0%, Fig. ). Approximately one-fourth (24.0%, n = 72) of the participants were students or residents. Regarding career level, men (71.2%, 37/52) were more likely, albeit not significantly, to be department chiefs (p = 0.149). The women at this level were younger than the male participants (p = 0.032, Fig. ). More men than women worked in private practice (24 vs. 9, p = 0.009). The geographic distribution of participants revealed concentrations in some regions such as North Rhine-Westphalia and Berlin (Fig. ), which, however, correlate with population density. Comparison with data derived from the German Medical Association demonstrates that the sample is representative for some regions, e.g. North Rhine and Westphalia-Lippe. As indicated in the graphical presentation (Fig. ), the spatial distribution of study participants also follows the uneven distribution of ID specialists and ID training centers across Germany. Education and working areas of participants The results showed that 15.6% (47/300) of the participants were in their residency training and 4.3% (13/300) in the first two years of their training. Out of the 300 participants, 51.3% (n = 119) worked at a university hospital, with the main focus on patient care (33.7%, n = 101) or on science (6.0%, n = 18). Furthermore, 18.3% (n = 55) of the participants worked at tertiary maximum care hospitals, 13.7% (n = 41) at district or municipal hospitals, and 10.7% (n = 32) in private practice or in an ambulatory medical care center (3.3%, n = 10). The majority of the participating specialists were specialized in internal medicine (76.0%, n = 228), many of whom had acquired additional training in ID (Fig. ). Roughly one-third of the participants (37.3%, 112/300) aspired to become an ID specialist via the newly developed residency, and 10.7% (32/300) planned to complete a residency in a different specialization, mostly in ID-related fields like microbiology, virology, or Infection Control and Prevention (7.0%, n = 21). Regarding the additional training in ID, 38.9% of the participants (114/293) had already completed it and a further 31.4% (92/293) planned to finish it (Fig. b). Every tenth participant (10.0%, 29/293) indicated a lack of support from their workplace for completing the additional training in ID. Currently, additional training in ID is only accessible to specialists in another medical field (e.g. internal medicine). Of those specialists participating in our survey, 50.9% (113/222) had obtained the degree within the framework of the additional training in ID and 44.6% (99/222) had the DGI certification as ID specialist. Both certificates (ID degree and DGI certification) had been obtained by 33.8% (75/222) of specialists. In only 11.3% (25/222) of cases had the DGI certificate alone been obtained. Residents, in comparison to the group of specialists, rarely planned to complete the DGI certificate (p = 0.002, Fig. ). Regarding the planned training goals in ID, there was no relevant difference within the group of students (p = 0.115, ID specialist versus DGI certificate).
Among ID specialists who had already completed additional training in ID, 22.1% (25/113) would not choose to redo this training, 18.6% (21/113) would do it again, and 54.0% (61/113) were concerned about the future recognition of the additional training degree and therefore additionally planned to complete the sub-specialization in ID. Interests in related additional training and specialization Regarding further education in the related field of tropical medicine, nearly half of the participants were not interested in additional training in tropical medicine (47.4%, 139/293) or in the Diploma in Tropical Medicine and Public Health (45.1%, 132/293). On the other hand, 5.1% (15/293) of the participants had already completed the additional training and 12.6% (37/293) had already completed the Diploma (Fig. ). Personal educational goals The question of whether participants felt well prepared for their future work life in ID after their ID training was answered by 57.3% (172/300) of them. Of those, 37.2% (64/172) felt that they were well prepared, 41.3% (71/172) felt rather well prepared, and 8.7% (15/172) felt rather poorly prepared. 12.8% (22/172) of this subgroup could not assess the situation. Antimicrobial stewardship and microbiology With regard to antimicrobial stewardship (AMS), 19.1% of the participants did not plan to attain the certification as "AMS Expert", 17.1% were indecisive, 31.4% planned to achieve it, and 32.1% had already completed certified further training in AMS (Fig. ). It became evident that the AMS certificate was less frequently intended in the group of ID specialists in private practice (p = 0.019) and among department chiefs (p < 0.001) compared to students, residents and ID specialists in general (Fig. ). The question of whether an ID specialist should in general be recognized as an AMS expert was answered with yes by 20.0%, with no by 10.3%, and with yes provided the content of AMS courses were integrated into the specialist training in ID by 64.0% (Fig. ). With regard to whether training in microbiology and virology (hereafter summarized as microbiology) should be included in the new ID specialist curriculum, a total of 84.6% (254/300) would prefer an inclusion of at least 3 to a maximum of 12 months (Fig. ). Compatibility of family and career Regarding the compatibility of family and career, only 30.8% (53/172) of participants reported that they were sufficiently supported by their employer or that it was possible to take care of their child/children without outside help or support from the employer (Fig. ). On the other hand, 19.8% (34/172) of participants felt that their employer's support did not reach far enough and was only accessible to a few persons, whereas 23.3% (40/172) needed help from family members, friends or child-care professionals, and 26.2% (45/172) thought that the compatibility could be improved considerably. Approximately one-third (36.3%) of participants stated that they were parents and reported on parental leave. Of those, significantly more women had taken parental leave (p = 0.005, Fig. ). This difference was most evident in the subgroup of department chiefs. Here, only 21.0% of men but 40.0% of women had decided to take parental leave (p = 0.024, Fig. ).
Compatibility of science and clinical work Almost two-thirds of the participants (62.0%, 186/300) aspired to work in research, although for 12.0% (36/300) the employer did not expect scientific engagement. 26.0% (78/300) did not want to work in research, 2.7% (8/300) of them even though their employer required it. 12.0% (36/300) were indecisive. Scientific commitment was supported, for example with granted time off (fully or mostly), in 51.7% (89/172) of cases. In contrast, 26.7% (46/172) of the participants were mostly not or never supported by their employer. 62.8% (108/172) of the participants were supervised by a senior scientist most of the time or always, but supervision was lacking in 20.9% (36/172). No significant difference between men and women was found regarding support for research (data not shown). Encouragement of scientific research was distributed equally between DGI centers and non-DGI centers (p = 0.677) and was perceived similarly by residents and by specialists/department chiefs (p = 0.303). Likewise, sufficient supervision by experienced scientists was reported equally in both subgroups.
This paper reports on a cross-sectional survey that examined how ID training was conducted in Germany before the ongoing transition of the current training standards. The survey captured past experiences with ID training as well as expectations and desires for how the training should be shaped in the future. The survey's study population mainly comprised clinicians working at university hospitals, tertiary hospitals and ID centers, which is in line with the organization of ID care and training in Germany. Medical students, residents and specialists in private practice were proportionally underrepresented in the survey, which might be explained by a lower level of involvement in the Young DGI section and the fact that the survey was distributed within the individual networks of participants (Fig. ). Hence, it seems possible that these groups were less frequently reached by the call for survey participation. Nevertheless, a representative sample was found for some federal states, so that a reliable overall picture can be assumed (Fig. ). The majority of the survey participants were internal medicine specialists or internal medicine specialists with additional training in ID, with more than 75% of the participants belonging to this group. Most of the participants had either completed the certified additional training in ID or aimed to complete it, indicating a high acceptance of ID training among participating colleagues. The participation of many colleagues who have already completed ID training or even supervise it themselves is a strength with regard to the evaluation of the current ID training, as they have very good insight into existing structures. On the other hand, due to the lower participation of younger colleagues, we cannot be certain that we have comprehensively assessed their wishes and needs for ID training. However, this could be an indication that networking and interest in contributing to the improvement of training are not yet very prevalent in this group and should be promoted. Notably, almost every tenth participant was interested in further qualification in ID despite their employer not supporting such training. This fact should be seen as an opportunity to attract residents interested in further training to ID centers. Almost one in five would rather opt for the sub-specialization than for additional training in ID during further training, probably because the new sub-specialization in ID is seen as the more desirable degree by the high proportion of internal medicine specialists among the survey participants. The new specialist training offers direct and in-depth acquisition of the ID sub-specialization during residency. Interest in the DGI ID specialist designation was not equally distributed, with residents being less interested in completing it. Students were equally likely to be interested in completing the additional training in ID and the professional designation, suggesting that students might not be fully informed about the different training opportunities and their current status and therefore unable to clearly discriminate between them. Correspondingly, Schneitler et al. showed in a survey of medical students and young doctors that knowledge about possible postgraduate training paths should be generated at the university level in order to attract them to specialized disciplines. In conjunction with the available data, this shows that interest in the discipline and its possible training paths should already be promoted at the university level.
Therefore, efforts should be made to improve the visibility and accessibility of relevant training opportunities, especially for students who are interested in pursuing a career in infectious diseases. Residents were probably more likely to be informed about the planned discontinuation of the DGI's ID specialist designation. The survey found that an overwhelming percentage of participants were committed to research, with many receiving support from senior researchers and protected time for science. Moreover, there was no significant difference in the perceived support for scientific involvement between the respondents from recognized ID centers and other hospitals. This observation implies that the quality of support for scientific research and involvement may not be correlated with the institutional status of the hospital or medical center. Instead, the level of support for research and scientific careers could be a function of personal factors, such as the mentorship of senior researchers or the availability of research opportunities in the respective hospital or medical center. Further research could shed more light on these factors and help improve the support for scientific career development in the field of infectious diseases. No difference between men and women was found regarding research. It remained unclear why the literature and our data differed here, so this should be addressed in further research. Many participants work at university hospitals and maximum care providers, so the high proportion of people interested in research does not seem unusual. Since research is important for the further development of the discipline, the broad support reported here should continue to be granted and become a fixed component of continuing education. One in ten ID specialists in the survey reported that they did not feel they had received good training to become an infectious disease specialist. One possible reason is the lack of a structured curriculum at the workplace. In addition to the evaluation of the training institution, this must also be viewed in light of the short training period that has applied to ID training so far. This will likely be balanced out by the new specialist training, so that an increase in training satisfaction can be assumed. Overall, ID training left a good impression on the respondents, and factors not recorded, such as individual support, the equipment of the training center, case mix and support for extracurricular training opportunities, could be decisive for this. In any case, this good result should be a reason to undertake efforts to maintain a high level of satisfaction with the quality of ID training in Germany and perhaps to improve it even further. Even though our survey did not capture the exact family situation of the participants, a large proportion of them are concerned about the compatibility of family, patient care and a scientific career. Approximately 50% (women) to 70% (men) of the respondents felt either that their employer did not provide them with sufficient support in balancing work and family or that they had to call on additional private or professional support, e.g. for childcare (Fig. ). There was no difference between respondents of DGI centers as core training sites and other sites. Not surprisingly, in terms of gender equality this survey also showed that among respondents with children, women had taken parental leave more frequently and for longer time periods than their male colleagues.
This difference was particularly striking in the subgroup of department chiefs, who in addition were in any case much more likely to be male. This is in line with the literature stating that building a family impacts the career opportunities of women. Furthermore, female participants stated more often that an interruption in further education would have a negative impact on their careers. Given the number of female students in medicine, these statements show that it is important not only to promote young talent directly in order to sustainably invest in preserving and expanding a sufficient number of qualified physicians for the field of ID in Germany, but also to promote the compatibility of career and family across gender boundaries. This is even more important because many physicians, women in particular, are lost to the medical workforce over the course of their careers. For this purpose, modern working models such as part-time work, parental leave, on-the-job training and similar measures must be introduced into the daily work routine, and disadvantages in further training and career due to parenthood must be reduced. Efforts should be made by educators and employers to increase the compatibility of dedicated clinical and scientific work with family life and to prevent an exodus from academic professions to other fields. As in many other occupational groups in Germany, men should be encouraged to take parental leave to support the career paths of their partners. Currently, many regional medical associations are in the process of implementing and introducing a new sub-specialization in internal medicine and ID. Hopefully, this option will soon be available throughout the entire country. A linkage of the new qualification with, e.g., organizational indicators for outpatient or inpatient care is to be expected. Many of the specialists had already decided to train in ID during their studies (31.4%) or during their specialist training (34.8%), which makes it clear that junior staff should be recruited at early stages of their professional career. Obviously, investment in exciting and dedicated ID education should be seen as an investment in the promotion of new talent in order to attract new physicians to the field of ID. The importance of new ID specialists in Germany should be discussed, as well as the need to integrate training in AMS and microbiology. One argument put forth is that the integration of AMS and microbiology rotations into the training curriculum would be beneficial, as both disciplines touch on or include relevant aspects of infectious diseases. Jippes et al. showed that the successful implementation of a new postgraduate training programme should also take regional factors into account; here, the increasing centralization of infection diagnostics raises the question of whether a compulsory rotation can be implemented in a meaningful way. This issue must be addressed critically and constructively, to ensure that the training is not only theoretically valuable but also practical and implementable. Wijk et al. reported that successful implementation of a programme requires not only a vision but also coalitions with others; this is particularly pertinent to the question of microbiology and clinical pharmacy, so that interdisciplinary exchange is strengthened for all those involved.
Furthermore, successful implementation requires that educators have enough time and funding to create a programme. Overall, the data show that further training in ID was characterized by satisfaction; this is the result of a long-established structure, and the introduction of the new specialist qualification should be harmonized as far as possible between the state medical associations. However, attention should also be paid to creating a good interim arrangement between the curricula; the data from Fokkema et al. show that this requires partial support. This survey has some limitations. Concerning the representativeness of the data, the chosen study method means it cannot be ruled out that mainly participants with an interest in the topic responded, although comparison with Federal Medical Association data showed that a quantitatively representative sample was achieved for some federal states. The study focused on the specialist qualification for internal medicine and ID, which might have influenced the answers with regard to the additional further training. By distributing the survey via the digital network of the Young DGI section and defining the main thematic focus, a deliberately accepted preselection of participants occurred, so that mostly individuals who had at least partially completed their ID training in Germany took part. Compared with data published by the German Medical Association in 2021, it is also probable that proportionally more senior physicians participated in this survey than are present in the general medical community, although the distribution of specialist qualifications corresponds to the national ranking. The high participation of already trained ID specialists is both a strength and a weakness of the survey: they have good insight into existing structures and curricula, but the conclusions that can be drawn about the wishes and ideas of students and young professionals are limited. With regard to research education and development, our survey was not designed to fully cover the compatibility of research and clinical practice. Therefore, it was not possible to capture this complex field, or satisfaction with it, in sufficient depth.
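As a brief methodological aside on the representativeness comparison mentioned above, the sketch below illustrates — with invented counts and a test chosen by us, not by the survey authors — how a sample's distribution across federal states could be compared with Federal Medical Association reference figures.

```python
# Hypothetical example: compare a survey's federal-state distribution with a
# reference distribution (e.g. Federal Medical Association statistics).
from scipy.stats import chisquare

observed = [34, 21, 17, 28]                 # invented survey counts per federal state
reference_share = [0.30, 0.20, 0.18, 0.32]  # invented reference proportions

total = sum(observed)
expected = [share * total for share in reference_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value is compatible with a representative sample,
# although, strictly speaking, it cannot prove representativeness.
```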
The collected data highlight significant uncertainty in the recognition of previous degrees. It is evident that the inclusion of theoretical content, such as AMS, in the curriculum is essential for future specialists. The interdisciplinary nature of infectiology is demonstrated by the desire of many participants to include microbiology in their rotations, despite potential challenges with centralization of infection diagnostic services. It is crucial to discuss interdisciplinary concepts early on to ensure that qualifications are met adequately. Additionally, the study revealed that it is challenging for the younger generation to navigate the various training paths available, emphasizing the need to provide early guidance. Moving forward, it is essential to work towards gender equality in both clinical and private practice settings by improving the balance between family and career. Overall, the survey suggests a generally high level of satisfaction with the quality of ID training in Germany, but efforts should be made to maintain and improve it further.
|
A phenomenology of direct observation in residency: Is Miller's ‘does’ level observable?
|
9f72dc31-f025-49ef-8566-835645a64f85
|
10107098
|
Family Medicine[mh]
|
INTRODUCTION Direct observation (DO) is a cornerstone of competency‐based medical education (CBME); it is at the heart of workplace‐based assessment (WBA) with its formative and summative purposes. , Yet the uptake of DO in postgraduate medical education (PGME) is poor. , , The literature on DO in PGME provides ample explanations for this poor uptake, such as unclear stakes, fear of assessment, difficulties in interacting with patients and expectations concerning both autonomy and efficiency that conflict with asking for, or offering, observation. , , , , , , , , , One important recurring finding is the ‘observer effect’. As Ladonna and colleagues found, observed residents felt as if they were ‘staging a performance’; they behaved less naturally towards patients and ‘they exchanged their ‘usual’ practice for a ‘textbook’ approach’. Feedback on this ‘inauthentic behaviour’ was not considered useful by these residents. There can be no doubt that inauthentic behaviour is a serious threat to the value of DO as ‘an assessment of “does” at the top of Miller's pyramid for assessing clinical competence’. , Assessment of the ‘does’ level is an assessment of the learner's ability to function independently in clinical situations. In their guidelines on DO, Kogan and colleagues recognised that ‘learners may default to inauthentic practice when being observed (e.g., not typing in the electronic health record when taking a patient history or doing a comprehensive physical exam when a more focused exam is appropriate)’. But the authors counter this problem by stating that ‘observers should encourage learners to “do what they would normally do” so that learners can receive feedback on their actual work behaviour’. To make this easier, according to the same guidelines, supervisors, while physically being in the situation, should be as little present as possible, for example, by sitting out of the patient's line of sight. This take on the supervisor's role, however, seems to conflict with our recent phenomenological research on patients' experiences in DO situations in general practice (GP) training. Patients, for several reasons, needed—and indeed caused—supervisors to participate in the conversation to some extent. One of those reasons was that patients needed the senior's approval of the junior's approach. Taking this seriously would imply a movement in the opposite direction, where a supervisor judiciously participates in the activity, rather than keeping out of it as much as possible. This contrasting insight, however, is supported by only one study, on one perspective, that is, that of patients. Importantly, we obtained our understanding of the patients' perspective by following a phenomenological approach, meaning that we investigated regularities in what patients essentially experienced in DO situations. As Veen and Cianciolo advised, when facing persistent problems in medical education (such as the lack of DO in most training contexts), we should take a philosophical approach that ‘empowers us to slow down when we should, thereby engaging us more directly with our subjects of study, revealing our assumptions, and helping us address vexing problems from a new angle’. Phenomenology is such an approach; it enabled us to see the discrepancy between what patients needed in DO situations and how medical education conceptualises DO. 
A similar phenomenological understanding of residents' (and supervisors') experiences in DO situations is lacking and is needed to find answers to the questions that have arisen, based on a more complete understanding of DO situations from all perspectives. We, therefore, followed a phenomenological approach to investigate the regularities in how residents essentially experienced working with a patient while a supervisor was physically present, observing them. As indicated, DO is central to WBA. , A phenomenological approach, however, entails investigating phenomena without pre‐defining them, in terms of their purposes for example. We therefore investigated DO situations as defined in the research question above, regardless of the purposes or other definitions of DO.
METHODS 2.1 Phenomenological approach We performed a phenomenological interview study in one Dutch GP training centre. Medical education literature often distinguishes interpretive (or hermeneutic ) phenomenology from descriptive (or transcendental ) phenomenology. , However, Rietmeijer and Veen proposed that, rather than subscribing to a specific school, authors should make clear how they understand phenomenology and how they applied principles of phenomenology in their study. We now describe these principles and the methods we used. 2.1.1 Common structures in pre‐reflective experience We investigated what residents experienced in DO situations before they had reflected on these situations: the ‘pre‐reflective experience’. Although investigating this pre‐reflective experience is an unattainable ideal, our goal was to learn what participants' reflections, ideas and opinions revealed about the common structures of this pre‐reflective experience. These common structures are also called regularities, or invariant structures, or essences of the experience. , , 2.1.2 Open, theory‐free; bracketing We deliberately started this study without a theory on DO situations, for example, in terms of participants' roles, methods or goals. We focused on how the situation in itself occurred to residents. In line with phenomenological principles, this open approach enabled us to see aspects of the phenomenon that would remain unnoticed had we pre‐defined it and narrowed our object of interest. In order to attain this openness, we had to ‘bracket’ (= suspend) our ‘natural attitude’ towards our object of investigation. , , , , With a natural attitude, we would take our assumptions about relations between the resident, the patient, the supervisor and the DO situation for granted. In other words, before starting the interviews, we would already have predefined DO situations, for instance, as a teaching event, with particular roles for all the participants. With a phenomenological attitude, by contrast, we are precisely interested in participants' experiences of these relationships between themselves, the other participants and the situation. , , , , Although reflexivity on one's assumptions is common in all qualitative research, in phenomenology, bracketing goes further than that and means suspending theoretical and conceptual ideas that may narrow one's sight of the phenomenon. This is often referred to with Husserl's dictum ‘to the things themselves’. Researchers must, therefore, constantly be aware of both their own natural attitude and the natural attitude of the interviewees. Bracketing was, consequently, equally important during the interviews and the analysis of them. This entailed constantly suspending opinions and theories about DO that arose and bringing them back to what they revealed about how residents experienced DO situations and what were the common structures of this experience. Before starting the interviews, CBTR and SCMvE each wrote an essay on what they thought were important aspects of the experience of being the resident in a DO situation. In these essays, they also reflected on their natural attitude, what they tend to take for granted regarding DO situations, including findings from their previous research on DO. , , They subsequently interviewed one another about these essays and used these reflections as the start of a reflexive diary and further memo writing throughout the interview and analysis period. 
As one example of what this exercise revealed, it appeared that both researchers were convinced that a junior doctor must learn from a senior doctor, with DO playing a role. However plausible this seems, by deliberately suspending this and other opinions/theories (e.g., as described in the introduction), they tried to become more sensitive in their interviews to see also other aspects of DO situations that contributed to residents' pre‐reflective experience. 2.2 Context We performed our interviews in the western part of the Netherlands. Dutch GP training is a competency‐based, 3‐year training programme; residents spend their first and final years in GP, working under the nearby supervision of one—sometimes two alternating—GP trainers. Residents visit their academic training institute 1 day each week for their day release programme. Supervisors and residents are increasingly encouraged by the training institute to engage in regular bi‐directional DO sessions, taking turns being the doctor or the observer, during patient care. The take‐up of this advice in practice, at the time of our interviews, was growing but still moderate. The authors did not work at the training institute and had no relationship with the residents interviewed. 2.3 Data collection In 2021, we sent an email invitation to a total of 30 first‐ and third‐year residents, randomly chosen, to be interviewed about their experiences in DO situations. Those who accepted were interviewed by either SCMvE or CBTR. The interviews took place via video calls because physical encounters were restricted because of the Covid 19 pandemic. The interviews were unstructured in the sense that there were no pre‐fixed questions other than the opening question: ‘Can you tell me about a situation in which your supervisor was present in the room, observing you while you were working with a patient?’ However, our aim of understanding the how of the experience did influence the type of questions that we used: We followed van Manen by deliberately looking for his ‘existential elements of experience’. Van Manen claims that people experience things in their body (e.g., what they feel and what they do), in time (e.g., what happens when and how fast the time goes), in place (e.g., who sits where and position of furniture) and in relationship (e.g., familiarity with the patient and quality of the training relationship). To get to the how of the experience, we asked quite factually what happened in specific DO situations, guided by these existential elements of experience. , The interviews varied in length from 60 to 75 min. 2.4 Analysis The interviews were videotaped. Both CBTR and SCMvE first—separately—analysed the video recordings holistically by capturing in one or two phrases what this interview told them about our topic (i.e., ‘sententious phrases’ [van Manen]). They then transcribed and anonymised the interviews, and CBTR analysed these transcriptions in four rounds of coding through Van Manen's different lenses of lived body, lived space, lived time and lived relationship. Using these four lenses made us more sensitive to all these aspects of the experience. It was an important step in the analysis. The aim of this, however, was to gain a more complete picture of the experience, not to describe the experience in four categories. Therefore, in the results section, we will not report on these existential elements but will break down the pre‐reflective experience in recurrent, or common, structures. 
, , CBTR grouped the codes by interpreting what they seemed to reveal about specific common structures of the pre‐reflective experience (e.g., ‘residents' awareness of the supervisor as an assessor’). He determined these common structures through a process of ‘imaginative variation’. Imaginative variation means asking oneself if the experience would still be the same experience without this structure. If the answer was no, it was a common structure. , CBTR wrote reflexive memos during this process. He then sent all this material to SCMvE who read the transcripts herself, commented on codes, code groups and memos, and added more codes and memos. SCMvE and CBTR discussed their findings during video calls, after each interview. After three and six interviews, PWT joined them in a meeting to review the analyses thus far. A further review of the analyses took place in two meetings with the whole team, including MV, AHB, HEvdH and FS, who commented on examples of codes, code groups and memos and on the system of analysis.
RESULTS We interviewed a total of six residents, five of them in the second half of their first year, and one in her third year of the training. All residents had experience with being observed by their supervisor throughout a whole consultation. These DO sessions were intended to be formative. Most accounts we heard were about these scheduled DO situations, but some were about ad hoc observations when the supervisor was called in for advice during a consultation. We analysed the interviews by interpreting what they revealed about common structures of residents' pre‐reflective experiences. We report on these common structures in the following paragraphs. A first and obvious common structure was that in DO situations residents experienced being in a room with a patient and with a supervisor. Residents experienced verbal and non‐verbal interactions between themselves and the patient and the supervisor, as well as interactions between the supervisor and the patient: R2 So, then I feel that I have to work a bit harder, I'm almost doing like ‘hallo!!’ (waves her hand) […] if the patient keeps talking to the supervisor […] Then I think: I was supposed to do this conversation, but this way I'm not quite succeeding. A second common structure in residents' pre‐reflective experiences was that they experienced being observed by their supervisor while making an impression on both the supervisor and the patient: R6 Well, but yes, you are very conscious of being in training and that, um, the patient forms an opinion of you, and that the supervisor forms an opinion of you […]. Awareness of the impression they made on supervisors could make residents proud of their accomplishments: R1 And then I thought, yeah, […] this is going well, this is going well, this is going well, and I secretly thought like, oh, this is going nicely and I'm glad that my supervisor is here (and sees it). This awareness of being observed could also make residents feel insecure and even handicapped compared with a not‐observed consultation: R6 Well, um, I feel that when I'm being observed I know less often what it is or what I have to do; and, normally, I would think of something, or make something up, but if my supervisor is observing me, I'm afraid that I'll say the wrong things. Feeling insecure and handicapped was most prominent when residents discussed the diagnosis and care plan with the patient: R2 […] concerning the diagnosis and how to handle this, if I am not entirely certain, I can't be very firm in saying we're going to do this […] because perhaps the supervisor will interrupt and say that we're not going to do this at all […] I found that very awkward to have to do. Feeling insecure and/or handicapped could also relate to residents' personal way of interacting with patients: R2 […] that I wonder if my supervisor approves […] that can concern multiple aspects, such as how I communicate with patients, I'm quite approachable and not so formal if possible, and then I hope that she will appreciate that too […]. As another common structure of the experience, residents experienced their observing supervisor as a senior colleague and potential helper. This could lead residents to ask the supervisor's opinion, for the sake of optimal patient care, even if this was to the detriment of the impression they made on their supervisor as an independent worker: R3 Especially the care plan, I want to have that checked at least. 
I don't want the patient to get less than optimal treatment when the expert, notably, was sitting beside me […] I always have that conflict: this is an observation so I should act as if he wasn't there. But then I consult him anyway […]. Also, residents often experienced their supervisor as the patient's familiar GP. This, too, could make residents engage their supervisor in the conversation: R4 […] I think that the patient likes that (when I discuss things with my supervisor) […] because she sees that her own GP agrees. Another common structure was the residents' experience of the position of the supervisor in the room: R3 Yes, it would have helped if she had sat more to the side, a bit behind me […] Now I realise that she sat right between us […] almost like a mediator […]. R5 Well he is really quite literally someone to lean on, someone who supports me, so if he did not sit behind me but to the side and further away, that would perhaps give me the feeling (of being in charge). Strikingly, despite the disturbances resulting from the presence of the supervisor, residents often did experience the observation situation as an invitation, or assignment, to show how they work independently: R1 […] This was quite a good three‐way conversation (with a patient and his son); my supervisor sat to the side, and he did not intervene, he, uh …, he just observed, and uh… I did it all by myself […]. Trying to work independently, as if they were alone with the patient, could cause many frustrations: R5 When I get bogged down a bit, or lose the overview …, if he were not there I would recover myself, […] but, apparently, I mostly don't manage to recover when my supervisor is present. R1 and that's …, then you're not your best self, you're not functioning optimally […] while you do wish you did, that's a paradox. By contrast, some residents provided accounts of times when they did not experience DO situations as an assignment to show how they work independently; they could also interpret the situation as an opportunity to work and learn together with their supervisor, observing each other, which they valued: R2 […] I was inclined to turn the situation into a collaborative consultation […] I like that, complementing each other […] sparring about what would you do, and um, yeah, I thought that was fun […]. Interestingly, this interpretation of the DO situation mostly arose spontaneously and was not agreed upon in advance with the supervisor. As a last common structure of experience, residents had a pre‐existing relationship with their supervisor based on previous experiences, which influenced how they experienced the DO situation: R2 I could get along very well with this supervisor, we had a trusted relationship, so I didn't mind being observed by him.
DISCUSSION In order to advance our understanding of DO in general, and specifically concerning the ‘observer effect’, ‘authentic behaviour’, Miller's ‘does’ level and the participation of supervisors in DO situations, we investigated regularities in what residents pre‐reflectively experienced in DO situations. Our results illuminate how an observing supervisor substantially changed the experience of residents and their behaviour, compared with unobserved consultations. We will elaborate on this in the following paragraphs. Ladonna and colleagues found that residents reported behaving ‘inauthentically’ under DO, thus not showing how they would work when not observed, that is, independently. , These authors held the observer effect responsible for this, which refers to acting differently when feeling observed and assessed. This observer effect, however, is often regarded as something that can be overcome, by encouraging residents to behave as they would normally do, and by creating better DO conditions. , , Such conditions comprise longitudinal, trusted, training relationships, recurring DO sessions with dedicated time and measures to promote residents' autonomy such as supervisors avoiding contact, including eye contact, with the patient by deliberately sitting to one side. , , , Our results confirm that these precautions may indeed help reduce distracting interactions and make residents feel more at ease. However, feeling more at ease and less distracted is not the same as being able to work ‘authentically’, or independently, as one would when the supervisor is not there. We found that the observer effect that is caused by the presence of the supervisor did not allow for working independently because this effect was much more material than was previously understood: By being in the situation that was observed, the supervisor changed that situation in numerous ways with an inevitable impact on what the resident and patient experienced, felt and did. As one example, we found that residents were tempted to engage their supervisor in the conversation for the sake of optimal patient care and comfort, even if they would not have done so in an unobserved situation. For these residents, when the senior was in the room, it felt unnatural not to make use of their expertise. The familiarity of the patient with the supervisor, often their GP, was an additional reason for residents to engage their supervisor. Our previous study of patients' experiences in DO situations indicated that patients also drew supervisors into the conversation, for the same reasons: the supervisor's seniority and/or familiarity with the patient. We conclude that the observer effect is not just about residents feeling observed and assessed; the presence of a supervising GP changes the situations to be observed in profound ways. Therefore, observing Miller's ‘does’ level, defined as observing how a resident works independently, , seems impossible. We found that residents often struggled with this: They experienced the expectations of the supervisor, or the programme, as needing to show how they work independently, while they simultaneously experienced that this was impossible, and even undesirable in the interests of good patient care. Residents who coped with this by complying with the expectations and trying to work as independently as possible reported many impediments and frustrations. 
Previous research also highlights that DO often brings about uncomfortable situations and awkwardness for all three participants: residents, patients and supervisors. , , , , , We add to this literature that one of the causes for this may be found in the discrepancy between patients' and residents' needs for the participation of the supervisor on the one hand, and the DO guidelines‐driven supervisors' and residents' attempts to keep the supervisor out of the conversation, on the other. 4.1 Practical implications Although this study was not about assessment, our findings may have implications for WBA. , How do we collect the data we need for assessing our residents' competence? How can we be certain that a resident is becoming an independently competent GP or medical specialist? How does PGME live up to its societal accountability? Must we not assess residents with a certain degree of distance and objectivity? These are common and valid questions in medical education; more so in the CBME era. Our results, however, question the feasibility of this distanced observing of independent competence. As shown, what observers observe is, at least in part, caused by their presence. In the natural sciences, we would speak of artefacts. This helps us see that the wish for an ‘objective’, distanced, judgement of a resident's performance actually reveals a natural scientist, that is, a (post‐) positivist attitude. In social constructivism, however, these artefacts can be valid starting points for a dialogue concerning their meaning for the resident's learning trajectory. This resonates with the literature on feedback, in which the importance of dialogue is increasingly emphasised. , , Our results suggest that this dialogue should start with firmly establishing that what we have seen has no meaning in itself. Being clear about this instead of ambiguous, as was often reflected in our results, may relieve tensions in DO situations. When we translate the above again to Miller's pyramid, we will never see more than the ‘shows how’ level, which makes the ‘does’ level a construction that we build upon what we have observed and what we infer from other sources. Concerning these other sources, a growing body of knowledge supports new complementary ways of assessing the residents' progress in competence, derived from, for example, ethnography and phenomenology. , , As a last practical implication, our results suggest that residents and supervisors could improve their dialogue concerning the purpose of their being in the same room with a patient and how to proceed. Recent research in similar GP training settings confirms that residents and supervisors hardly discuss this. , , An important factor for this dialogue appears to be that DO situations seem to work best for learning when DO is bi‐directional and not foregrounded. , Residents and supervisors should therefore consider using DO situations to work and learn together while observing each other, collecting, sharing and together interpreting observational information along the way. 4.2 Implications for future research Our phenomenological research amongst residents and patients has contributed to the conceptualisation of DO in PGME, highlighting the inevitable participation of supervisors in the situations they observe. In this, a phenomenological investigation of supervisors ' experiences in DO situations is yet an important missing piece. 
The main contribution of this work to the literature is the new conceptualisation of the observer effect, not just as anxiety‐provoking but as a material alteration of the situation. Further research in other contexts is needed to confirm and/or improve this understanding. Moreover, we need more research on how information obtained from working and learning together sessions can best inform summative assessments of residents. 4.3 Limitations We conducted our research in one Dutch GP training centre, limiting its transferability to other contexts. An important contextual factor to mention is that patients, in GP training, usually know their own GP, who is the resident's supervisor, better than they know the resident. This fact contributed to one of our findings: Residents experienced the presence of their supervisor as the patient's familiar GP. This could encourage them to engage the supervisor in the conversation. The other reason to engage the supervisor, their seniority, will probably apply to education contexts in most health professions. A second limitation is that this is a small interview study in only one context: a GP training centre in the Netherlands. In phenomenological research, however, small numbers of participants often suffice to attain meaningful, though not exhaustive, results. As van Manen puts it: ‘Every phenomenological topic can always be taken up again and explored for dimensions of original meaning and aspects of meaningfulness’. Also, the validity of inductively obtained theory is not determined by its quantitative underpinning per se but by its usefulness in different contexts, which needs to be determined further.
CONCLUSION Our results indicate that the ‘observer effect’ is much more material than was previously understood. Consequently, observing residents' ‘authentic’ behaviour at Miller's ‘does’ level, as if the supervisor was not there, seems—theoretically and practically—impossible and a misleading concept: misleading because it invited residents to do the impossible: to work as if the supervisor was not there while he/she was there and substantially changed the situation, with all the reported associated problems and distress; misleading also because it made supervisors try to avoid participating in the situation, thereby potentially neglecting patients' and residents' needs; and misleading, finally, because it made residents and supervisors waste opportunities for educating and learning. Based on our results and previous findings, we suggest that when a resident and a supervisor are together in one room, engaged in patient care, one‐way DO is better replaced by bi‐directional DO in working‐and‐learning‐together sessions.
None.
The study protocol was approved by the ethics review committee of the Netherlands Association for Medical Education (NVMO) NERB file number: 2020.2.7.
Chris B. T. Rietmeijer is the first researcher; he led all steps of the design of the study, data collection, coding, further analysis and interpretation of the data. He wrote all versions for revision and comments by the other authors and processed all comments until the final manuscript. He agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Suzanne C. M. van Esch contributed substantially to the conception and design of the study; she performed half of the interviews, helped analyse and code the video recordings and transcripts and helped in further interpretation of the data. She revised the subsequent versions of the manuscript for important intellectual content. She gave her final approval of the version to be published and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Annette H. Blankenstein contributed substantially to the conception and design of the study; she helped analyse the transcripts and helped in further interpretation of the data. She revised the subsequent versions of the manuscript for important intellectual content. She gave her final approval of the version to be published and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Henriëtte E. van der Horst contributed substantially to the conception and design of the study; she helped analyse the transcripts and helped in further interpretation of the data. She revised the subsequent versions of the manuscript for important intellectual content. She gave her final approval of the version to be published and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Mario Veen contributed substantially to the conception and design of the study; he helped analyse the transcripts and helped in further interpretation of the data. He revised the subsequent versions of the manuscript for important intellectual content. He gave his final approval of the version to be published and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Fedde Scheele contributed substantially to the conception and design of the study; he helped analyse the transcripts and helped in further interpretation of the data. He revised the subsequent versions of the manuscript for important intellectual content. He gave his final approval of the version to be published and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Pim W. Teunissen contributed substantially to the conception and design of the study; he helped analyse the transcripts and helped in further interpretation of the data. He revised the subsequent versions of the manuscript for important intellectual content. 
He gave his final approval of the version to be published and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
|
Editorial Comment on “Performance of PRAME immunohistochemistry compared with that of c‐Kit, c‐Myc, or cyclin D1 for the diagnosis of acral melanocytic tumors”
|
b3edab02-f0a6-4b9c-b5f4-a4fdb8ac195f
|
10107274
|
Anatomy[mh]
| |
Global psycho‐oncology in low middle‐income countries: Challenges and opportunities
|
4cde36b5-b893-4736-8c42-b52d45adaa84
|
10107342
|
Internal Medicine[mh]
|
BACKGROUND GLOBOCAN has predicted that there will be a 47% rise in cancer incidence by 2040, with an estimated 28.4 million cases by that time. , The largest increase, and therefore the greatest burden of disease, will be seen in transitioning or low Human Development Index (HDI) countries, with this trend exacerbated by globalisation and evolving economies. The infrastructure needed for cancer care and control, including its psychosocial dimensions, is least well developed in these countries, , although there is a large body of global evidence demonstrating the value of psychosocial cancer care. , , This Special Issue of Psycho‐Oncology is intended to highlight research and clinical innovations that have emerged from low and middle‐income countries (LMICs) regarding the psychological, social and cultural aspects of cancer. Such research has not often found a voice in mainstream oncology, despite its connection to the WHO Sustainable Development Goal 3 ‐ Good Health and Well‐being. The generation and dissemination of such evidence are essential steps in ensuring that evidence‐based psychological care is included in clinical practice guidelines and in national cancer control plans and that disparities in psychosocial cancer care that exist across countries and regions are addressed. Clinical practice guidelines have typically been based on high‐level evidence from transitioned and high‐World Development Indicators (WDI) countries, which is often not translatable or relevant to low and medium‐WDI contexts. Platforms that highlight psycho‐oncology research from LMICs can help to ensure that a more relevant evidence base is disseminated and taken into account in future guideline development in these regions.
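As a brief aside on how the two GLOBOCAN figures cited at the start of this section fit together (the arithmetic below is purely illustrative and is ours, not part of the GLOBOCAN report): a 47% rise that ends at 28.4 million cases implies a current baseline of roughly 19.3 million new cases per year.

```python
# Back-of-the-envelope check of the cited projection (illustrative only).
projected_cases_2040 = 28.4   # million new cases, GLOBOCAN projection cited above
relative_rise = 0.47          # predicted 47% increase by 2040

implied_baseline = projected_cases_2040 / (1 + relative_rise)
print(f"Implied current baseline: {implied_baseline:.1f} million new cases per year")
# -> about 19.3 million, in line with GLOBOCAN's estimate for 2020
```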
OVERVIEW OF THE SPECIAL ISSUE CONTENT The Special Issue begins with two important Editorials that argue for the integration of psychosocial cancer care and for a global cancer initiative to promote the implementation of such services in LMICs. An invited Commentary, based on a webinar hosted by the Education Subcommittee of the International Psycho‐Oncology Society (IPOS) Palliative Care Special Interest Group, highlights the impact of Covid‐19 on global palliative care and the consequences for psychosocial care across the world. A number of papers in this Special Issue address barriers and potential solutions to the development of research in LMICs and to the implementation of culturally sensitive psychosocial interventions. Bizri et al. describe challenges in building psycho‐oncology services in Lebanon, and Onyeka et al. found, with a few exceptions, that huge gaps exist in psychosocial care for patients with cancer in sub‐Saharan Africa. Global partnerships can be of value to advance the routine implementation of psychosocial care for patients with cancer and their families. In that regard, Costas‐Muñiz describes an international collaboration to connect clinicians, educators and researchers from Latin‐American and Spanish‐speaking countries who are engaged in psychosocial oncology, behavioural medicine and palliative care. Building a unified voice in advocacy, Kim et al. report on the consensus of more than 1400 professionals in psycho‐oncology throughout the world on the need for new resources to address unmet needs of cancer survivors and family caregivers. Decat Bergerot et al. describe a global breast cancer initiative to improve the comprehensive care of patients with breast cancer in low and middle‐income countries in order to improve global breast cancer outcomes. The support of global organisations such as the Union for International Cancer Control (UICC), highlighted in the editorial by Johnson and Adams, will be essential in order for such initiatives to achieve their goal. Papers in this Special Issue also draw important attention to the interconnections among mental, physical and social determinants of health. Thaduria et al. report on the association between financial toxicity, employment and well‐being in oral cancer survivors in a sub‐Himalayan city in North India in the era of Covid‐19. In a systematic review of the duration of time from symptom onset to the first consultation with a health professional in breast cancer patients, Petrova et al. demonstrated the role that literacy and education, stigma, low socioeconomic status and social support play in delaying help‐seeking in this population. Ainembabazi et al. have demonstrated the link between perception of risk and seeking of screening in female relatives of patients with breast cancer in Uganda. Psychosocial research included in this Special Issue identifies individual resiliency factors, such as self‐compassion, in patients with cancer in Xi'an, China; pilot studies are also reported demonstrating the feasibility of written exposure therapy for posttraumatic stress disorder in Iranian women with breast cancer and of acceptance and commitment therapy for parents of children with a haematological malignancy or a solid tumour. Asuzu et al. report a pilot psychosocial intervention group in Nigeria that shows potential in supporting breast cancer patients. However, the need for greater attention to interventional research in psychosocial oncology was highlighted by Onyeka et al. 
in a scoping review of the psychosocial aspects of cancer in sub‐Saharan Africa. In a similar vein, based on a survey of cancer care providers in Africa, Lounsbury et al. provide support for the need for culturally grounded communication research and program design. To support this, Costas‐Muñiz et al. have provided a step‐by‐step guide to the cultural adaptation process for cancer‐related interventions. Five studies focus on country‐specific issues within Kenya, Ghana, Uganda, Indonesia and Brazil. Together they highlight a common thread: psychosocial care is developing in LMICs but remains in need of improved access to resources. The research reported within this Special Issue accentuates that implementation strategies can be developed through multi‐disciplinary and multi‐national collaboration.
CONCLUSIONS AND PRIORITIES FOR FUTURE RESEARCH We hope that this Special Issue provides information and impetus to stimulate partnerships and advocacy to build research, education and clinical services in psychosocial oncology. Building local research infrastructure and generating local evidence will support the inclusion of evidence‐based psychosocial care in national cancer plans and in universal health care, with the engagement of community‐based strategies and resources. These are essential steps to ensure that psychosocial care is considered a fundamental and core component of humane and comprehensive cancer care. We hope this Special Issue of the Journal will play a small part in this process.
The authors declare that there is no conflict of interest.
|
Datasets for the reporting of primary tumour in bone: recommendations from the International Collaboration on Cancer Reporting (
|
242b77e9-5821-4b9b-a0b1-fb806c6b5cab
|
10107487
|
Pathology[mh]
|
Pathology reporting on cancer resection specimens provides information that is essential for individual patient management, used for clinical trials and tissue‐based research, and recorded in cancer registries. Given this central role of pathology data in cancer care and research at both the individual and population levels, standardised and structured pathology reporting is essential to ensure that the relevant information is complete, unambiguous, and delivered in a user‐friendly format. Evaluation of bone tumour biopsies is often perceived as highly challenging by pathologists because of their rarity, the relatively high number of distinct tumour subtypes (which often show overlapping histomorphology), and the requirement for clinical‐radiological correlation to come to an accurate diagnosis. Moreover, surgical resection specimens can be complex to evaluate/process due to the various anatomic locations that may be involved and the necessity for extensive macroscopic evaluation, documentation, and correlation with imaging findings. Thus, for accurate diagnosis of bone tumours a multidisciplinary approach is imperative. It is the responsibility of the clinician or radiologist requesting the pathological examination of a specimen to provide information to the pathologist that will assist subsequent tissue processing, diagnostic evaluation, and final interpretation. The use of a standardised pathology requisition/request form including a checklist of important clinical information is strongly encouraged to help ensure that these data are provided by submitting clinicians. It is the responsibility of the pathologist to verify that all radiological and clinical information essential to make a diagnosis is available to guarantee that the final diagnosis is made within the appropriate clinical/imaging context. This is often achieved through discussion at a multidisciplinary tumour board meeting. Several worldwide organisations such as the College of American Pathologists (CAP) and the Royal College of Pathologists (RCPath) have independently developed datasets for pathology reporting on bone sarcoma. , , The International Collaboration on Cancer Reporting (ICCR) coordinates the production of evidence‐based international pathology reporting datasets that have a consistent style and contain all the parameters needed to guide patient management. The ICCR is a collaboration of multiple pathology organisations and has alliances with international cancer organisations, including the International Agency for Research on Cancer (IARC), Union for International Cancer Control (UICC), and American Joint Committee on Cancer (AJCC). The ICCR datasets are freely available from the ICCR website (http://www.iccr‐cancer.org). Here we report on the development of datasets for the pathology reporting of primary bone sarcomas (both biopsy and resection specimens), discuss the rationale for the inclusion of data items, and propose a consensus position in areas of controversy and where there is limited evidence to assist pathologists in their diagnostic practice. The ICCR has developed a set of standardised operating procedures for the process of dataset development and has also defined the selection process, roles, and responsibilities of the chair, expert panel members, the ICCR Dataset Steering Committee representative(s) on the panel, ICCR Series Champion, and the project manager (http://www.iccr‐cancer.org/datasets/dataset‐development).
The ICCR Series Champion provided guidance and support to the Chair of the Dataset Authoring Committee (DAC) regarding ICCR standards and ensured harmonisation across the bone and soft‐tissue suite of datasets. An international expert panel consisting of pathologists, an oncologic orthopaedic surgeon, a medical oncologist, and a radiologist was established. Initial draft documents were produced by the Project Manager and chair after assessment of core and noncore data items within existing international datasets for bone sarcomas. These drafts were circulated to the Dataset Authoring Committee (DAC) and individual dataset items were discussed at a coordinated series of teleconferences. Subsequently, an agreed version of the revised datasets was posted for open international consultation on the ICCR website for a period of 2 months. All comments received were subsequently discussed by the DAC and, where there was universal agreement from DAC members, resultant changes were incorporated into the datasets. Final versions were ratified by the ICCR Dataset Steering Committee prior to publication. All ICCR datasets, including these on bone sarcomas, are freely available worldwide at the ICCR website at www.iccr‐cancer.org/datasets . Scope The ICCR has developed two separate datasets for the pathology reporting of biopsy and resection specimens of primary bone tumours. Ewing sarcoma and related round‐cell sarcomas arising in bone are also covered in these datasets. Some types of soft‐tissue sarcoma may on rare occasion arise primarily in bone and should be reported using the primary tumour in bone datasets, rather than the soft‐tissue sarcoma datasets. If biopsies are taken from multiple tumour nodules at different sites, these should be documented separately. Haematologic malignancies and metastatic specimens were excluded from these datasets. Core elements Core elements are those that are essential for the clinical management, staging, or prognosis of the cancer. These elements will either have evidentiary support at Level III‐2 or above (based on prognostic factors in the National Health and Medical Research Council (NHMRC) levels of evidence ). The summation of all core elements is considered the minimum reporting standard. A summary of the core elements for the biopsy and resection datasets is outlined in Tables and , respectively, and each is described in further detail below. Neoadjuvant therapy For resection specimens, information about neoadjuvant treatment is essential for proper interpretation of the microscopic findings and accurate pathological diagnosis. Preoperative radiation and/or other therapy may have a profound effect on the morphology of both the cancer and benign tissue. Knowledge of such prior therapy may help to interpret changes such as necrosis, cellular atypia, and inflammatory infiltrates. For this reason, information regarding any prior therapy is important for the accurate assessment of bone specimens. Different scoring systems are being used and are discussed under ‘Response to neoadjuvant therapy’. For example, the use of denosumab in giant‐cell tumour of bone induces bone formation and reduces the number of multinucleated osteoclast‐like giant cells within the lesion; therefore, this information is crucial for diagnostic interpretation. Also, previous embolisation may cause areas of necrosis. Moreover, neoadjuvant use of many novel therapies (such as tyrosine kinase inhibitors or immunotherapy) may result in histological effects and need to be fully disclosed. 
Imaging findings Correlation between histologic and radiologic findings is critical in the diagnosis of bone tumours. Ideally, every case should be discussed in a multidisciplinary conference or the pathologist should have at least access to the imaging findings when evaluating a biopsy. This is the main reason imaging findings are considered a core element in bone tumour evaluation. For instance, aggressive features identified radiographically (permeative/moth‐eaten growth, cortical destruction, soft tissue extension, type of periosteal reaction) should be mentioned here, as well as multifocality, evidence of matrix deposition, presence of fluid–fluid levels, etc. For instance, in cartilaginous tumours in the phalanx or in Ollier disease, the distinction between benign and malignant may depend solely on whether there is cortical destruction, which may be impossible to evaluate on biopsy or fragmented curettage specimens alone. Therefore, these diagnoses cannot be made without radiological correlation. The presence of fracture should always be documented, as it may alter the morphological features and, in some instances, simulate aggressive features, such as host bone entrapment. As the histological alterations caused by the fracture change over time, it is important to know the time frame between fracture and biopsy. Finally, certain bone tumours (cartilaginous tumours, vascular tumours) tend to occur multifocally, and this information is also helpful for the pathologist. The histological diagnosis should always be correlated with the radiological diagnosis and one should always be cautious when there is a discrepancy between radiological and histological findings. Multidisciplinary discussion is essential, and repeat biopsy should be considered if differences of opinion are not resolved. It is important to know the exact tumour site within the bone, since the histological differential diagnosis will differ between intramedullary tumours and those arising primarily at the bone surface. Also, some tumours are almost exclusively found in the epiphyseal region (e.g. clear‐cell chondrosarcoma, giant‐cell tumour of bone, chondroblastoma), while others preferentially affect the metaphysis (osteosarcoma) or involve also the diaphysis (Ewing sarcoma, adamantinoma). Moreover, primary soft‐tissue sarcomas may arise adjacent to and even invade bone, while primary bone sarcomas may have an extensive soft‐tissue component. In these cases, radiological information is required to decide whether the tumour originates primarily from bone or soft tissue. It is also important for the pathologist to be aware of the radiological differential diagnosis when evaluating bone resection specimens. The presence of a pathologic fracture may influence histological evaluation and should be documented. Certain bone tumours (cartilaginous tumours, vascular tumours) tend to occur multifocally, and skip metastases can be present. This is important information for the pathologist when working up the resection specimen. Finally, the radiological response evaluation should be recorded after neoadjuvant therapy. Anatomical site For biopsy specimens, the anatomical site should be documented by the radiologist and reported under ‘Imaging findings’. Recording the anatomical site of the tumour is important, since certain bone tumours have predilections to arise in specific bones and not others, and/or there is a strong association between anatomic site and patient outcome. 
The latter is especially true for cartilaginous tumours; as a consequence, the World Health Organization (WHO) Classification of Tumours, Soft Tissue and Bone Tumours (5th edition, 2020) distinguishes between atypical cartilaginous tumour and chondrosarcoma grade 1, depending on whether the tumour is located in the appendicular or axial skeleton, respectively. When arising at appendicular sites (the long and short tubular bones), these tumours behave in a locally aggressive manner and do not metastasize. Therefore, they can be treated locally and should not be classified as having full malignant potential. The term ‘atypical cartilaginous tumour’ is thus preferred for cartilaginous tumours involving the long and short tubular bones. In contrast, the term ‘chondrosarcoma, grade 1’ is used for histologically similar tumours of the axial skeleton, including the pelvis, scapula, and skull base (flat bones)―reflecting the poorer clinical outcome and the necessity for more aggressive treatment of these tumours at these sites. It should be noted that the definition of axial versus appendicular is not universally accepted; while the 2020 WHO Classification categorizes the scapula and skull base as part of the axial skeleton, the UICC /AJCC TNM 8th editions include these sites with the appendicular skeleton. Here we consider the scapula and skull base to be part of the axial skeleton. Tumour site For biopsy specimens, the exact tumour site within the bone should be documented by the radiologist and reported under ‘Imaging findings’. Tumour laterality For biopsy specimens, laterality should be documented by the radiologist or clinician and is reported under ‘imaging findings’. Tumour laterality is a core element. Tumour dimensions For biopsies, the size of the largest tumour nodule should be documented by the radiologist based on imaging, preferably in three dimensions, as this is important to evaluate the tumour volume; in the dataset this will be reported under ‘imaging findings’. In cases where the radiological tumour dimensions cannot be assessed, such as for discontinuous tumour, it is important to note this and record the volume of tumour if possible. If biopsies are taken from multiple tumour nodules at different sites, these should be documented separately. When reporting the gross evaluation of a bone resection specimen, the pathologist should measure the size of the tumour on the resection specimen in at least its largest linear dimension (core element), but preferably in three dimensions (noncore element), as this is important in estimating tumour volume. Operative procedure This element includes the type and intent of the operative procedure, independent of the final margin assessment by the pathologist. On the rare occasion that lymph nodes are included with the specimen, these should be listed under ‘other’. Metastasectomy specimens can also be listed under ‘other’. Histological tumour type Histologic diagnosis is based on the WHO classification of soft tissue and bone tumours, 5th edition, 2020 (Table ). The diagnosis is usually made on biopsy before resection. In some cases, the biopsy is suboptimally targeted on the area(s) of interest or affected by the surgical process, leaving the pathologist with tissue that can be underrepresentative or misrepresentative of the lesion based on the imaging studies. For some entities, more sophisticated testing (e.g. 
molecular analysis) may be required to achieve an accurate diagnosis, but the small tissue size, tissue processing issues, or suboptimal targeting of biopsy materials may preclude ancillary diagnostic testing. The pathologist should specify any and all limitations of the tissue sample that prevent achieving an optimal pathologic diagnosis. In addition, comments can be made in case the diagnosis on biopsy is uncertain for reasons other than limitations of the material or when there remains a differential diagnosis. When reporting resection specimens, a comment should be included if the final diagnosis based on the resection specimen is discordant with the previous diagnosis on the biopsy. Histological tumour grade In bone sarcomas, the histotype primarily determines histologic grade (based on the 2020 WHO Classification ), with only very few exceptions. Bone sarcomas in which the grade is determined by histotype are outlined in Table . Microscopic extent of invasion For correlation with imaging findings, histological evidence of permeative growth, cortical invasion, and destruction or soft tissue extension should be recorded when reporting resection specimens. This is facilitated when gross examination is aligned with the radiological imaging. Thus, preferably radiologic images should be available when processing specimens. Response to neoadjuvant therapy The response to preoperative chemotherapy is of prognostic value, especially in Ewing sarcoma and osteosarcoma, and needs to be evaluated in a standardised way when reporting resection specimens. At least one complete central slab of tumour through its largest dimension should be submitted for histological evaluation. Additional sections can be taken from the remaining two hemispheres of the specimen, especially near the periosteum and/or areas of soft tissue extension. The amount of remaining viable tumour cells should be estimated on each histological slide to obtain an average score reflecting the overall percentage of response. Response does not always consist of necrosis; very often, extensive fibrosis and calcification can be seen, which is also considered response. In osteosarcoma, a cutoff of 10% viable tumour cells (or 90% or more response consisting of tumour necrosis, fibrosis, and calcification) is used to indicate a good response. For Ewing sarcoma, the cutoff is less well defined. Albergo et al . (2016) recently showed that a 100% response was optimal to define a good tumour response in Ewing sarcoma. In earlier reports (the Bologna system as well as the van der Woude scoring system ), a good response was defined as the percentage of necrosis of the microscopic tumour mass between 90% and 100%. In the literature, different cutoffs are used to evaluate chemotherapy‐induced necrosis. , , , Margin status Most features relating to the margin status of resection specimens are core (Table ). There is no generally accepted approach for reporting bone tumour margins. If margins are involved, a distinction is often made between microscopic involvement (R1) and resections in which it is evident macroscopically that the tumour has been incompletely resected (R2). In case of negative margins (R0), the minimum that should be documented is the distance of the tumour to the closest margin. Some guidelines recommend that all margins <20 mm should be documented in terms of depth and the tissue comprising each that is <20 mm (e.g. fascia, periosteum, epineurium, vascular sheath). 
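To make the response scoring described above concrete, the short sketch below averages per‐slide estimates of remaining viable tumour and applies the cutoffs mentioned in the text. It is an illustrative assumption of how such a calculation could be coded, not an agreed computational standard; the function names are invented for this example, and the fixed cutoffs (90% response for osteosarcoma, 100% for Ewing sarcoma) are simplified from the discussion above.

```python
def overall_response(viable_percent_per_slide: list[float]) -> float:
    """Average the per-slide estimates of remaining viable tumour cells and
    express the result as an overall percentage response (necrosis, fibrosis
    and calcification all count as response)."""
    mean_viable = sum(viable_percent_per_slide) / len(viable_percent_per_slide)
    return 100.0 - mean_viable


def is_good_response(viable_percent_per_slide: list[float], tumour: str = "osteosarcoma") -> bool:
    """Apply the cutoffs mentioned in the text: <=10% viable tumour cells
    (>=90% response) for osteosarcoma; for Ewing sarcoma a 100% response has
    been proposed, although cutoffs in the literature vary."""
    response = overall_response(viable_percent_per_slide)
    cutoff = 100.0 if tumour == "ewing" else 90.0
    return response >= cutoff


# Example: estimates of viable tumour (%) from five slides of an osteosarcoma resection
slides = [2.0, 5.0, 10.0, 0.0, 8.0]
print(overall_response(slides))   # 95.0 (% response)
print(is_good_response(slides))   # True, since the response is >= 90%
```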
Ancillary studies All immunohistochemical stainings and molecular tests that contributed to the diagnosis should be documented. For instance, for Ewing sarcoma and other round‐cell sarcomas, lymphoma, adamantinoma, and chordoma, these ancillary studies (immunohistochemical and/or molecular) are critical. Noncore elements Noncore elements are those which were unanimously agreed by the committee to be included in the dataset but are not supported by NHMRC level III‐2 evidence. These elements may be clinically important and recommended as good practice but are not yet validated or regularly used in patient management. A summary of the noncore elements for each of the datasets is outlined in Tables and and each is described below. Clinical information For accurate diagnosis of bone tumours, a multidisciplinary approach is imperative. It is the responsibility of the clinician or radiologist requesting the pathological examination of a specimen to provide information to the pathologist that will have an impact on the diagnostic process or affect its interpretation. The use of a standard pathology requisition/request form including a checklist of important clinical information is strongly encouraged to help ensure that this information is provided by the clinicians with the specimen. It is also the responsibility of the pathologist to verify that all radiological and clinical information essential to make a diagnosis is available to guarantee that the final diagnosis is made within the appropriate clinical/imaging context. This is often achieved through discussion at a multidisciplinary tumour board meeting. Biopsy handling Core needle biopsy is often performed under computed tomography (CT) or ultrasound guidance with all imaging studies available for review during the planning and execution of the procedure. Preferably a minimum of three cores are submitted for diagnosis. A frozen section can be performed on a representative selection of cores or the tissue obtained at open biopsy, to evaluate whether the biopsy has yielded adequate tissue for diagnosis. Adequacy may also be determined by cytological rapid on‐site evaluation (ROSE); the advantage of ROSE is that the biopsy core evaluated remains almost entirely intact, preserving tissue for other ancillary testing. Moreover, a provisional diagnosis can sometimes be given, and based on the results the remaining tissue can be triaged for further work‐up. Bone tumours need decalcification when formalin‐fixed and paraffin‐embedded (FFPE), which, depending on the type of decalcification used, may severely hamper the use of ancillary techniques. Decalcification should optimally be performed with solutions that preserve RNA and DNA, or a representative core should be kept frozen or embedded in paraffin without prior decalcification, to allow for molecular testing. Acid‐based decalcification (other than EDTA) should therefore be avoided if frozen tissue is unavailable. Necrosis Necrosis in biopsy specimens where the patient has not received neoadjuvant treatment should be documented, especially if necrosis is abundant, hampering microscopic evaluation of the tumour. Lymphovascular invasion Lymphovascular invasion (LVI) is extremely rare in bone tumours. However, it is important to report if identified in the specimen. 
Margin status In addition to documentation of involvement of margins (R0, R1, R2, distance of tumour from closest margin and localisation of the closest margin) which are considered core, some additional features of margin status are noncore (Table ). The type of tissue comprising the resection margin could also be recorded (e.g. pseudocapsule, loose fibrous/fibroadipose tissue, bone, skeletal muscle, dense regular connective tissue fascia/aponeurosis/periosteum/vascular sheath/perineurium) since bone and fascia may be more robust marginal tissues than other tissue types. In addition, the distance to the closest osteotomy margin could also be recorded even if it is not the closest margin. Lymph node status Lymph nodes are very rarely submitted or found with bone specimens and it is not necessary to undertake an exhaustive search for nodes in the specimen. Although regional lymph node metastasis is very rare in adult bone sarcomas, its presence has prognostic importance and it is important to report. Coexistent pathology If present, the pathologist should report other abnormalities that are relevant for the diagnosis and any other significant pathologic finding, even if unrelated or not directly relevant. For instance, the presence of precursor lesions for chondrosarcoma, such as multiple enchondromas, osteochondromas, or the presence of synovial chondromatosis, should be documented. Paget disease, osteonecrosis, or bone infarction may be seen in association with a secondary sarcoma. The presence of a pathologic fracture may influence the histological evaluation and should be documented. Other unrelated findings may include vasculitis, infection, coexistent chronic lymphocytic leukaemia (CLL), or incidental/unexpected metastatic carcinoma in the same specimen. Pathological staging It is important that pathologists document the required parameters for tumour staging (according to UICC or AJCC 8th edition Staging Systems) in their reports. Ultimately, the final stage will be determined by the treating physician or by the multidisciplinary team, which will take both the pathological and imaging findings into account.
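To illustrate how the core and noncore elements described above lend themselves to structured, machine‐readable capture, the sketch below models a hypothetical resection report as a simple data structure that can be serialised for registry submission or data exchange. The field names, value formats and example record are illustrative assumptions for this example only; they are not the official ICCR element names or value lists, which a real implementation would follow exactly.

```python
"""Minimal sketch of a structured bone-sarcoma resection report (illustrative only)."""
from dataclasses import dataclass, field, asdict
from typing import Optional
import json


@dataclass
class BoneSarcomaResectionReport:
    # Core elements (illustrative subset, not the official ICCR schema)
    neoadjuvant_therapy: str                          # e.g. "chemotherapy", "none"
    histological_tumour_type: str                     # per WHO Classification, 5th edition
    histological_grade: str                           # e.g. "high grade"
    tumour_max_dimension_mm: float                    # largest linear dimension
    response_viable_tumour_percent: Optional[float]   # None if no neoadjuvant therapy given
    margin_status: str                                # "R0", "R1" or "R2"
    closest_margin_distance_mm: Optional[float]
    # Noncore elements (illustrative subset)
    lymphovascular_invasion: Optional[bool] = None
    coexistent_pathology: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the report so it can be registered or exchanged between systems."""
        return json.dumps(asdict(self), indent=2)


# Example record for a hypothetical osteosarcoma resection
example = BoneSarcomaResectionReport(
    neoadjuvant_therapy="chemotherapy",
    histological_tumour_type="conventional osteosarcoma",
    histological_grade="high grade",
    tumour_max_dimension_mm=95.0,
    response_viable_tumour_percent=5.0,   # <=10% viable tumour would count as a good response
    margin_status="R0",
    closest_margin_distance_mm=12.0,
)
print(example.to_json())
```

Capturing every element in a fixed, typed structure of this kind is what makes reports comparable across laboratories, a point taken up in the discussion of machine actionability and FAIR data below.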
Herein the construction and content of datasets for the pathology reporting of biopsy and resection specimens of primary bone sarcomas internationally agreed upon by a multidisciplinary group of bone tumour experts working in tertiary referral centres for bone sarcoma are reported. The current evidence was considered and, where lacking, a panel consensus was reached. Data from the relevant medical literature, including the 5th edition of the WHO Classification as well as other existing published guidelines, were considered. , , The use of standardised reporting templates varies widely. Some pathologists may not engage a standardised template if it is laborious and time‐consuming, and lacks the flexibility desired for providing a more nuanced description of the differential diagnosis. A survey of pathologists demonstrated that only 44% agreed that standardised reporting facilitates reporting of an accurate diagnosis.
However, it is well established that structured pathology reporting ensures a more complete diagnosis and, as a consequence, improved treatment decisions and patient outcomes. , , Moreover, structured standardised reporting will accommodate cancer registries and facilitate future large‐scale artificial intelligence‐based studies. Standardised reporting is essential for machine actionability, i.e., the capacity of computational systems to find, access, interoperate, and reuse data with minimal human intervention. , These so‐called FAIR principles (Findable, Accessible, Interoperable, and Reusable) are especially important for rare cancers such as bone sarcomas, where collaboration in research is often required to achieve significant numbers of patients for meaningful statistical analysis. Worldwide standardised reporting is the first step towards FAIR data registration and stewardship and may enable future distributed machine‐learning approaches for rare bone sarcomas, where local databases are connected across institutions and countries without the necessity for patient data ever to leave the institute of healthcare provision. This will unlock research opportunities that are currently precluded by differences in registration, incompatibility of information systems, and privacy and regulatory concerns. Support for standardised reporting can be improved when it is endorsed by all multidisciplinary team members, when compatibility with other information systems is assured, and when it is incorporated into speech recognition systems. In conclusion, we propose here two international datasets for standardised reporting in bone sarcoma care to improve the diagnosis, treatment, and outcome for these patients and to facilitate future machine learning‐based approaches for these rare sarcomas. This research did not receive any specific grant from funding agencies in the public, commercial, or not‐for‐profit sectors. The authors report no relevant conflicts of interest. JVMGB wrote the initial draft with final review and revision by FW, FA, DB, JAB, JLB, JC, EdA, APDT, KJ, AM, GN, AR, AW, AY, and CF.
|
Dermatology mycology diagnostics in Ireland: National deficits identified in 2022 that are relevant internationally
|
74f7b2de-6b84-41f7-983f-e9fbe855fecf
|
10107536
|
Microbiology[mh]
|
INTRODUCTION Dermatophyte infections are among the most common global diseases, affecting 25% of the world's population, with asymptomatic carriage in 30%–70% of adults. Moreover, in the last two decades, there has been a dramatic increase in their incidence, due to a range of factors including socioeconomic problems, international travel, immigration from tropical countries and contact with animals, particularly pets. The clinical features of dermatophytosis may be mistaken for a wide range of other dermatological diseases including bacterial folliculitis, psoriasis and eczema. Many localised uncomplicated fungal skin infections in healthy individuals can be treated effectively by community pharmacists and general practitioners; however, access to accurate pathogen identification is important in moderate to severe, complicated or recalcitrant disease, in order to direct treatment appropriately. For example, in tinea capitis, treatment is often commenced based on clinical diagnosis; however, the choice of oral antifungal agent is dependent on the suspected species and subsequent pathogen identification; also, guidelines recommend that the definitive end point for adequate treatment must be the mycological cure, rather than clinical response. Infections with anthropophilic species such as Trichophyton violaceum or Trichophyton soudanense have shown a good response to terbinafine, yet zoophilic pathogens such as Microsporum canis have better cure rates with the use of griseofulvin or itraconazole. , Historically there has been a preponderance of zoophilic dermatophytes in our region, but a recent epidemiological study demonstrated a shift in prevalence to predominantly anthropophilic species over a 20‐year period. In the latter period of this study, mycology testing of skin, hair and nail samples was outsourced, and access was curtailed for patients in primary care settings. Conventional mycological diagnostic methods are time‐consuming, and when our laboratory faced staff shortages in 2016, mycology testing was outsourced to a referral laboratory. Thereafter, requests for fungal testing of skin, hair and nail samples were restricted to consultant dermatologists and other practitioners with specialist training, reducing the number of tests performed. The shortage of medical laboratory scientists is neither a recent phenomenon nor is it simply a local problem for our laboratory; calls to action to address the shortage began in the 1980s, , and even prior to the COVID‐19 pandemic it was recognised that the number of new medical laboratorians entering the workforce was not keeping up with future demand. In a national survey of the United States of America in 2018, vacancy rates in laboratories were ‘considerably higher’ than in a similar survey in 2016, and Microbiology Departments were amongst the worst affected with vacancy rates over 10%. At the time of writing, in our Microbiology Department we have a vacancy rate of 21%, and this has been an on‐going issue for many years. In the scientific literature, there is an abundance of guidance , , , and recommendations , , , , , , , , , , , for the diagnosis of dermatomycoses and onychomycoses. However, little is known of the degree to which laboratories have adopted new technologies such as molecular identification tests and antifungal susceptibility testing of dermatophytes.
Ireland has no national mycology reference laboratory and fungal skin, hair or nail infections are not notifiable diseases, so there is no oversight or co‐ordinated approach to the diagnosis and surveillance of these pathogens or their susceptibility to anti‐fungal agents. The aim of this study is to evaluate the dermatological mycology diagnostic service of our hospital and the other hospitals of Ireland, in comparison with similar services internationally and with recognised best practice.
METHODS 2.1 Ethics statement This study was approved by the Research Ethics Committee of University Limerick Hospital Group, Limerick, Ireland. 2.2 Setting The Department of Clinical Microbiology at University Hospital Limerick (UHL) provides a centralised microbiology service for six acute hospital sites of the region's hospital group, University of Limerick Hospitals' Group (ULHG). This service is provided to public and private healthcare facilities in the region including general practice, for a population of circa 400,000 people. Of note, there are no electronic patient records in this group of hospitals. Previous related research from our institution includes fungal bloodstream infections in our ICU patients, an epidemiological analysis of dermatomycoses and onychomycoses in our region over a period spanning 20 years, and several reports of multi‐resistant organisms detected in our hospitals, many of which resulted in outbreaks. , 2.3 Data and Analysis All mycology laboratory test counts from January 2001 to December 2021 were extracted from the Laboratory Information Management System (LIMS, iLab, Dedalus Healthcare, Italy), to provide an historical context to recent trends in the numbers of tests performed. For the period 2011–2021, a data extract of dermatology clinic attendance figures for the hospital was performed from the patient management system (iPMS, Dedalus Healthcare, Italy). Figures for the five‐year periods prior to and following July 2016 (when the change to testing methodology was implemented) were recorded. Similarly, a keyword search term count was performed of the patient clinical letters database (Filemaker Pro, Claris International) held at the dermatology clinic. The keywords ‘fungal’, ‘tinea’ and ‘onychomycosis’ were searched for in the letters of correspondence sent to general practitioners. The count of letters containing these keywords allowed a crude comparison to be made of the number of patients with these conditions seen in the periods before and after access to diagnostics was restricted. A survey was performed in all of the 28 public hospital Microbiology laboratories of Ireland to determine how many of those laboratories performed in‐house mycology testing of skin, nail and hair samples, and which of them routinely performed polymerase chain reaction (PCR) and/or susceptibility testing of dermatophytes and non‐dermatophyte moulds. The respondents were invited to supply test count data if they wished. This survey took the form of an e‐mail request in January 2022 and subsequent follow‐up of non‐responders. The pharmaceutical suppliers of the main dermatological anti‐fungal agents were contacted by e‐mail in January 2022, with follow‐up e‐mails for non‐responders. The companies were asked for details of the number of their unit sales per product for the Irish state and/or for the Mid‐West region of Ireland, especially data from 2011 to 2021 where possible. The companies were Brown and Burk IR Limited, GlaxoSmithKline Consumer Healthcare (Ireland) Limited, Novartis Ireland Limited, Viatris Global Healthcare T/A Mylan Limited, Johnson & Johnson Limited and Janssen Sciences Ireland. Data were analysed using Microsoft Excel.
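As a concrete illustration of the period comparison described in the Methods, the sketch below shows how the keyword counts and the percentage changes reported later could be computed programmatically. This is a minimal, hypothetical example: the study extracted counts from the iLab LIMS, iPMS and the FileMaker Pro letters database and analysed them in Microsoft Excel, so the CSV export and the column names assumed here ('letter_date', 'body') are illustrative only.

```python
"""Illustrative sketch only; the real analysis used LIMS/iPMS/FileMaker exports and Excel."""
import csv
from datetime import date

KEYWORDS = ("fungal", "tinea", "onychomycosis")
CUTOFF = date(2016, 7, 1)  # access to mycology testing was curtailed in July 2016


def count_keyword_letters(csv_path: str) -> dict:
    """Count letters containing each keyword, split into pre/post-curtailment periods.
    Assumes a hypothetical CSV export with 'letter_date' (ISO format) and 'body' columns."""
    counts = {period: {kw: 0 for kw in KEYWORDS} for period in ("pre", "post")}
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            period = "pre" if date.fromisoformat(row["letter_date"]) < CUTOFF else "post"
            body = row["body"].lower()
            for kw in KEYWORDS:
                if kw in body:
                    counts[period][kw] += 1
    return counts


def percent_change(before: float, after: float) -> float:
    """Simple percentage change between the two periods."""
    return (after - before) / before * 100


if __name__ == "__main__":
    # Worked examples using the median annual figures reported in the Results section
    print(f"GP specimens: {percent_change(855, 35):.0f}%")           # roughly -96%
    print(f"Clinic specimens: {percent_change(54, 117):.0f}%")       # roughly +117%
    print(f"Clinic attendances: {percent_change(2320, 4570):.0f}%")  # roughly +97%
```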
RESULTS For the five‐year period 2011–2015, the median number of skin, hair and nail specimens for mycology analysis received in our laboratory from general practitioners (GPs) was 855 specimens per annum. For the corresponding period following the restriction of access to this service (2017–2021), the median test count was 35 specimens per annum (i.e., a 96% reduction). The positivity rate (microscopy and/or culture) of these samples increased from 36.5% to 40% across these two periods. The dermatology clinic of our hospital showed an increase from 54 specimens per annum to 117 specimens per annum (117% increase) for the same two time periods and a reduction in the positivity rate from 30% to 27%. See Figure for a chart of specimen requests per requesting location. Total dermatology clinic attendance figures showed a similar increase over the two time periods. The median annual attendance for the clinic in the pre‐curtailment period was 2320 and the corresponding figure post‐curtailment was 4570 attendances (97% increase). This increase was weighted more heavily in favour of paediatric patients (140% increase) than adult patients (94% increase). See Figure for a chart of annual attendance figures at the clinic. The results for the count of letters from the patient letters database of the dermatology clinic with matches for the specific search terms ‘fungal’, ‘tinea’ and ‘onychomycosis’ also showed an increase. The total number of letters per annum in the pre‐curtailment period was 65 letters (21 ‘fungal’, 39 ‘tinea’ and 5 ‘onychomycosis’), and there were 127 letters per annum (29, 83 and 15, respectively) in the post‐curtailment period – a 95% increase. See Figure for the number of matches for patient letters containing the search terms ‘fungal’, ‘tinea’ and ‘onychomycosis’, as well as the total number of patient letters recorded per annum in the clinic. Between January and March 2022, a survey of the Microbiology laboratories of the public health service system (Health Services Executive) hospitals in the Republic of Ireland revealed that 10 of the 28 laboratories continue to perform in‐house fungal testing of skin, hair and nail samples. See Figure for a chart of the results of this survey. Nine laboratories refer their specimens to laboratories in larger hospitals in their region, often as part of a hub‐and‐spoke service that applies to many of the more specialised microbiology tests. Nine other laboratories refer their samples to a private reference facility for testing. Our laboratory was the only one of the six large (>600 beds) hospitals which did not provide in‐house testing of these samples. Medium‐sized hospitals were defined for this study as those accommodating 300–600 beds, and small hospitals were those with <300 beds. The bed capacity provides only a very rough estimate of the testing throughput of the laboratories; much of the testing workload comes from community healthcare facilities and general practice, which can vary widely for each hospital depending on their location. The laboratories were also asked whether they had curtailed access to their fungal testing of skin, hair and nails, and were invited to supply test count data. Two hospitals supplied 10 years of test count figures, one from the south of the country (‘Hospital B’) and one from the east of the country (‘Hospital C’), neither of which has had to restrict access to mycology diagnostic services (see Figure for details).
Some laboratories reported that they did not provide microscopy results for some of their users (usually general practitioners), but access to fungal culture testing was only restricted by two laboratories (including ours). The second laboratory introduced this measure as a result of the surge in workload due to the Covid‐19 testing. In a follow‐up question to the above survey, the respondents were asked whether they had in‐house capability for either susceptibility testing or PCR testing of dermatophytes. One of the respondents had validated a PCR system but had not yet brought it into routine use, and two other hospitals had trials of systems in progress. As such, at the time of the survey there were no hospitals in Ireland with a dermatophyte PCR system available for routine use. None of the respondents had a susceptibility testing system in use, and since there is no national reference lab facility in Ireland, isolates would need to be sent to the United Kingdom for susceptibility testing if required. In February 2022, the following pharmaceutical companies were contacted for sales data (11 years of data if possible) on their dermatological anti‐fungal products, and their responses are included below: Brown and Burk IR Limited (oral griseofulvin): No response. GlaxoSmithKline Consumer Healthcare (Ireland) Limited (topical terbinafine): No data available. Novartis Ireland Limited (oral terbinafine): Data for 2017–2021 supplied. Viatris Global Healthcare T/A Mylan Limited (oral terbinafine): Data for 2018–2021 supplied. Johnson & Johnson (Ireland) Limited (topical miconazole, topical clotrimazole, topical ketoconazole, topical terbinafine): Data for 2017–2021 supplied. Janssen Sciences Ireland (topical miconazole and hydrocortisone): Data for 2017–2021 supplied. No data prior to 2017 were available, but the data provided for the period 2017–2021 (excluding Nailderm tablets) showed a 12.5% increase in product sales. The data for 2018–2021 (including Nailderm tablets) showed an 11% increase in product sales. Figure provides a chart of the volume of sales for each of the above products.
LIMITATIONS Local pharmacy sales data were not available for the study. The Primary Care Reimbursement Service (PCRS) was contacted for antifungal reimbursement claims data. No data were available by the time the study concluded.
DISCUSSION The incidence of fungal skin infections is increasing at an alarming rate worldwide. Increased incidence was demonstrated in our region by the surrogate measures examined in this study: anti‐fungal sales data and dermatology clinic records of confirmed or suspected infections both show double‐digit increases in the last 5 years. Furthermore, the twenty‐year records of test requests of skin, hair and nail samples show year‐on‐year increases right up to the point when access to testing was curtailed. It is evident from our patient letter counts (see Figure ) that patients with fungal‐related disorders represent a very small proportion of cases seen at our dermatology clinics, suggesting that the main burden of disease and treatment management lies in the community setting, with general practitioners and pharmacists who have curtailed access to appropriate mycological investigations. Reports of outbreaks involving dermatophytes are commonplace in the scientific literature; a PubMed search for ‘tinea’ and ‘outbreak’ for 2012–2021 provides 767 results. Tinea unguium or onychomycosis was the most common body site mentioned in the study title (50.3% of those with a site stated in the title, 172/342), followed by tinea capitis (27.8%, n = 95), tinea versicolor/corporis (7.9%, n = 27), tinea pedis (7.3%, n = 25) and tinea faciei (2.3%, n = 8). Where a geographical region is mentioned in the title ( n = 423), regions in Asia were the most common (24.8%, n = 105), followed by Africa (24.3%, n = 103), Europe (20.8%, n = 88), the Middle East (13.7%, n = 58), North and South America (4.7% and 10.2%, respectively) and Oceania (1.4%, n = 6). No recent reports of outbreaks are available from Ireland, although a study from Dublin in 2006 described a disproportionate (85.5%) number of patients of African extraction among their paediatric tinea capitis patients. Fungal outbreaks are not unknown on this island, however; in 1948, a cluster of 368 tinea capitis cases was detected. Despite this, dermatophyte infections are not listed as a notifiable disease in this country, so there is no obligation to report them. A considerable shift in the epidemiology of dermatophytes has been demonstrated in our region in the last 20 years, with an increasing proportion of anthropophilic species detected from both skin or hair samples and from nail samples, and this has been mirrored in many other countries. , , , The migration of people, children in particular, during wartime has been linked with an increase in dermatomycoses. This was reported in the former Yugoslavia during the war that took place there in the 1990s, and was previously reported after the Second World War, when dermatomycoses spread epidemically. At the time of writing, more than 14 million people have fled Ukraine due to the war taking place there, many of them women and children. It is now more important than ever that dermatomycoses are monitored and identified to prevent large outbreaks from occurring. The results of this study demonstrated a dramatic reduction in the processing of specimens for fungal analysis from GPs after curtailment of mycology diagnostic services. The corresponding increase in dermatology clinic samples did not fill the gap left by this drop in community specimens, which could be explained in part by patients being treated for fungal infection without appropriate diagnostic confirmation, or being left untreated because of the lack of access to diagnostics.
Clinical papers , , and dermatology guidelines unanimously call for laboratory confirmation of fungal infection before oral treatment of onychomycosis is started. Clinical findings, nail disease pattern and mycological investigations are important in the treatment of onychomycosis (fungal nail disease); in particular, topical treatments are most effective in superficial onychomycosis but often ineffective in subungual or dystrophic onychomycosis, where prolonged courses of systemic antifungals are required to eradicate infection. Additionally, diagnosis of onychomycosis can be challenging, with similar clinical features seen in non‐dermatophyte nail infections, and non‐infectious conditions such as psoriasis, chronic trauma, lichen planus and nail bed malignancies. Antifungal medications are known to have multiple potential side effects and drug interactions, so prolonged courses in the absence of dermatophyte confirmation are not advised. Mycological identification not only supports diagnosis but also influences antifungal therapy choice and, in select cases, provides susceptibility information for recalcitrant infections. , Despite the recommendations for microbiological confirmation, investment in fungal diagnostics in this country has been poor. Just 10 of the 28 laboratories surveyed had in‐house mycology testing capabilities for skin, hair and nail samples, and none reported access to in‐house PCR or susceptibility testing of dermatophytes. Antifungal resistance has been called a ‘global public health threat’. , This is exemplified in India, where terbinafine‐resistant T. indotineae is highly prevalent, and terbinafine resistance in dermatophytes has also been reported in Iran, Japan, Denmark, Belgium, Finland, Switzerland, Germany, the United States, Canada, Bahrain and Brazil. Terbinafine resistance is especially concerning because alternative therapeutic options to treat dermatophytoses are limited. Antifungal resistance is also probably underestimated, since many countries, including our own, have not been performing susceptibility testing. Susceptibility testing of dermatophytes isolated from recalcitrant infections is imperative, , , , so access to this capability should be a priority for our diagnostic services. This would be most readily achieved by the creation of a mycology reference lab for the country, a resource that no national health service should be without. For such testing, current practice in our institution is the transfer of samples overseas to The Mycology Reference Laboratory in Bristol, United Kingdom. This further compounds costs to the health service and delays timely diagnosis. MALDI‐TOF MS (matrix‐assisted laser desorption/ionisation time‐of‐flight mass spectrometry) instrumentation has been reported to be capable of identifying dermatophytes, , although not yet to the same level of accuracy achieved by conventional methods. The availability of these instruments in most modern microbiology laboratories may mean that, in the future, the identification of fungal pathogens need not be a laborious and subjective process, and should make it easier for smaller laboratories to implement mycology testing without the specialised knowledge and experience required to identify fungi visually (macroscopically and microscopically). Nucleic acid amplification tests have replaced many of the conventional diagnostic techniques of the microbiology laboratory, and mycology testing is no exception.
The poor sensitivity of microscopy and culture, particularly after the onset of empirical treatment, and the long turnaround time for culture results give PCR testing a distinct advantage over traditional methods. There is an abundance of published research available evaluating dermatophyte PCR systems, but consensus has yet to be achieved on their applicability. The earliest publications , described PCR as a supplement to culture for assisting organism identification; later publications give a more prominent role to PCR but still suggest that classical methods ‘are still warranted for training purposes and when encountering specific diagnostic problems’. More recently, however, one publication suggests that PCR can ‘replace microscopy and culture for routine dermatophyte diagnosis’, while another author, regarding the same PCR platform, says that direct microscopy ‘remains relevant’ for these specimens. The Netherlands National Healthcare Institute reports a higher predictive value for the PCR test over direct microscopy and culture, and recommends that it should therefore replace traditional diagnostics in routine care. Many in‐house PCR systems have been developed, some even achieving ISO 15189 accreditation, but the most straightforward process for introducing a PCR system is via a ‘CE‐IVD’ marked commercial kit; some commercial kits are available that have not been fully evaluated (‘research use only’), and these should not be used for routine diagnosis. There are four ‘CE‐IVD’ marked dermatomycosis PCR platforms from three manufacturers available for use in Ireland currently: ‘Dermagenius® 2.0’ and ‘Dermagenius® 3.0’ (Pathonostics®), ‘EUROArray Dermatomycosis’ (EUROImmun) and ‘Dermatophytes and Other Fungi 12‐Well’ (AusDiagnostics). All four platforms have been described previously. , , , , , , , , See Table for a summary of the dermatological fungal isolates captured by each of these kits, and a full list of targets is available in the Appendix . Details are also available in the Appendix for another kit which is currently marked ‘Research Use Only’: ‘Novaplex™ Dermatophyte Assay’ (Seegene). Other studies have shown that the widespread use of over‐the‐counter antifungals may be promoting resistance, most notably to the azole drugs, which can mean that oropharyngeal, vaginal or even systemic yeast infections may need to be treated with less desirable alternatives such as amphotericin B, with possible complications and renal toxicity. The increased use of immunosuppressive therapy means that invasive fungal infections are an emerging problem worldwide, and the incidence of azole resistance is increasing. , Our data show significant use of topical azole creams and powders in this country: over 700,000 units were purchased by a population of 5 million people in 2021. Dermatological mycology testing has not been prioritised in many laboratories around the world, including our own, yet there is growing international evidence of increased incidence of infections and resistance to anti‐fungal agents. In Ireland, we have a growing population and increasing immigration, yet the testing capacity of our laboratories is being curtailed, susceptibility testing of dermatophytes is not being performed and new technologies have not been adopted. Furthermore, there is suboptimal epidemiological tracking of organisms and their antifungal susceptibilities, and there is no national oversight.
Dermatological fungal infections are commonly misconceived as a cosmetic problem, but left untreated they can cause pain, physical impairment, an increased risk of infections such as cellulitis and osteomyelitis in immunocompromised or diabetic patients, and a significant negative impact on quality of life. Recently, the WHO published a list of fungal priority pathogens causing systemic invasive infections; in this report, they suggest that future reports will include those causing dermatomycoses, highlighting the economic and health impact of the same. This study serves to highlight the need to improve current national practices in dermatological mycology testing and proposes practical steps toward doing so.
JP: Conceptualisation (equal); Writing–original draft (lead); Data curation (lead); Methodology (lead); Writing–review and editing (equal). EP: Conceptualisation (equal); Writing–original draft (supporting); Writing–review and editing (equal); SR: Writing–original draft (supporting); Writing–review and editing (equal). SF: Conceptualisation (equal); Writing–original draft (supporting); Writing–review and editing (equal). NOC: Conceptualisation (equal); Writing–original draft (supporting); Writing–review and editing (equal). CPD: Conceptualisation (equal); Writing–original draft (supporting); Writing–review and editing (equal).
This study was performed as part of a PhD program for the lead author (JP). Funding for the PhD was provided by University Hospital Limerick and the Corresponding Author at the University of Limerick.
The authors certify that they have no affiliations with or involvement in any organisation or entity with any financial interest, or non‐financial interest in the subject matter or materials discussed in this manuscript that would constitute a conflict of interest.
All the authors have revised the manuscript critically and have approved the final draft.
Appendix S1
|
The evolving landscape of thoracic surgical oncology
|
32ea40b8-fceb-4539-b2dc-f6e2f98477ef
|
10107667
|
Internal Medicine[mh]
|
INTRODUCTION The rich history of Thoracic Surgery finds its origins in the first described resection of the lung, performed secondary to infection with worms. Since its inception, there have been many iterations of Thoracic Surgery, with the early phases of progress driven by infectious etiologies such as tuberculosis. At the time of publication of this Seminar series, it will be approximately one decade short of a century since the first anatomic resections for thoracic oncology were performed in the early 1930s. , In actuality, thoracic oncologic resections were performed earlier, but not as the single‐stage procedures of contemporary Thoracic Surgery. The early experience with single‐stage resections was not entirely associated with long‐term survival until Graham and Singer described a single‐stage pneumonectomy in a physician who ultimately outlived Graham. , Surgery for esophageal malignancy has shared a similar story regarding its roots. In 1917, Torek described the resection of a thoracic esophageal malignancy with a cervical esophagostomy that was made continuous with the stomach via an external rubber tube. Since this original operation, esophagectomies evolved to using native viscera as a conduit and eventually to the use of the stomach as described by Ivor Lewis and McKeown. , , , Almost needless to say, other aspects of Thoracic Surgery, such as the surgical treatment of mediastinal, pleural, and chest wall‐based malignancies, have similar origins.
EVOLVING PARADIGMS IN THORACIC SURGICAL ONCOLOGY As with all medical specialities and subspecialties, Thoracic Surgery has enjoyed the benefits of innovations in diagnostic tools and therapeutics since its birth as a discipline. Diagnostic tools have grown from those requiring the surgeon to act as the sole provider to new tools requiring special expertise that complements the role of the Thoracic Surgeon. In the surgical realm specifically, the current era of Thoracic Surgery has seen a shift in emphasis from open approaches to the minimally invasive platforms of video‐assisted thoracoscopic surgery (VATS) and robotic surgery. This shift has been widely accepted as a significant milestone in the advancement of patient‐centered care owing to its direct beneficial impact on patients. , , , , , , , These and a myriad of other new technologies and techniques have enhanced the preoperative, intraoperative, perioperative, and postoperative care rendered to the Thoracic Oncology patient. The employment of three‐dimensional technologies for chest wall malignancies is yet another specific example of new technologies that embody this progress. By no means are the current versions of these approaches, techniques, and technologies the “be‐all, end‐all,” but they point toward further progress and promise in the treatment of Thoracic Oncologic diseases. Integration of other advances, such as machine learning and artificial intelligence to facilitate detection, decision‐making, and operations, will most certainly move to the forefront in the near future. Management of patients with Thoracic Oncologic disease processes has also evolved. Developments in technology outside of the surgical arena have rendered radiation oncologists formidable partners and, at times, competitors to Thoracic Surgeons in the therapies offered to patients with thoracic malignancies. The discovery of chemotherapy and its subsequent use in the treatment of thoracic malignancies have proven to be an important adjunct to operations performed in Thoracic Surgery. , , , Chemotherapy eventually progressed to the point where systemic agents were administered preoperatively to enhance what could be offered with surgical therapy. , Now the discovery and use of targeted therapies and immunotherapies in both the preoperative and postoperative settings have become the subject of intense study and have served as the substrate for an exciting era of integrated medicine. , , , , , These developments have forced the true Thoracic Surgical Oncologist to establish a working knowledge of molecular profiling and genomic sequencing to stay abreast of what is available for their patients. Additionally, the modern era of medicine and surgery has fostered much‐needed multidisciplinary collaboration, not only for the more routine thoracic malignancies such as lung and esophageal cancers, but also for other malignancies such as thymic epithelial tumors and malignant pleural mesotheliomas. Conceptually, progress in therapeutics has also changed how Thoracic Oncology frames disease: the field has moved from evaluating oncologic disease processes as discrete entities that lend themselves to diagnostic and therapeutic algorithms toward viewing a malignancy along a fluid continuum. In years past, patients with early‐stage disease were considered “cured” following a resection with negative margins.
This thinking has now been challenged by new data demonstrating a benefit of the newer adjuvant systemic therapies in earlier stages of disease. A byproduct of these new findings is that the dedicated General Thoracic Surgeon is truly becoming the gatekeeper for patients with certain stages of disease who would otherwise not be referred to see a medical oncologist. Similarly, stages of disease once deemed too advanced for resection are now being reconsidered for surgical resection owing to the marked responses to newer systemic agents. , This epoch of surgical therapy is in the process of challenging the idea that “definitive” therapies serve as exclusive alternatives to surgery. In reality, these pathways are finding intersections further downstream as greater experience is gathered with newer systemic therapies. In this manner, Thoracic Surgeons are now being called upon to complete the final curative‐intent therapies in stages of disease that otherwise would not have been considered appropriate in generations past. The development of a modernized definition of oligometastases and consolidative therapy, salvage resections, as well as extended resections for thoracic malignancies certainly signals an embracement and ongoing acceptance of an important change in thinking. Some progress in Thoracic Oncology lies in attaining greater clarity on a variety of topics. These include revisiting issues around sublobar resections as well as continuing the education process and debate regarding the optimal way in which to assess lymph nodes for thoracic malignancies. , The original surgical resection of lung cancer was by pneumonectomy, to be replaced by lobectomy in the 1950s. , In the current era of LDCT screening for lung cancer, we are witnessing a renewed interest in pulmonary segmentectomies. In the area of esophageal malignancies, and despite the passage of a considerable amount of time since the early esophagectomies, how Thoracic Surgery has come to understand and improve upon conduits and anastomoses remains a very relevant topic that requires regular updating. Lastly, in the current era, it would be remiss not to highlight the social disparities that exist in the delivery and receipt of care for thoracic malignancies. To know and learn of these is to begin the process of rectifying them.
CONCLUSION It was estimated that, just before the exponential rise in the practice of thoracic surgery in the 1950s, medical knowledge doubled every 50 years. It was further estimated that by 2010 this doubling occurred every 3.5 years, and by 2020 it was estimated to double every 73 days. The proverbial explosion of data and information has occurred pointedly in Thoracic Surgery as well, and the attainable knowledge in Thoracic Surgical Oncology is perhaps rising at an even steeper rate of ascent (Figure ). On many fronts, maintaining a working grasp of the pertinent and relevant information can be daunting. In this special seminar issue of the Journal of Surgical Oncology, there are a number of outstanding review articles by world‐renowned experts that efficiently recapitulate and present this information in a clear and focused manner, facilitating a more profound understanding of modern Thoracic Surgical Oncology. We are confident that you will find these contributions to be both immensely enjoyable reads and densely‐packed informative monographs that will serve the Thoracic Surgical Oncology community well in the years to come.
Strong contributions of the past are the foundational bedrock of clinical, technical, and scientific advances of the present and the future. Developments in Thoracic Surgical Oncology adhere to this principle with a landscape that continues to evolve. In this Seminars in Surgical Oncology special issue focusing on Thoracic Surgical Oncology, the ongoing progress and the promising innovations are highlighted to facilitate an understanding of the current state of the specialty.
|
First, do no harm. Ethical and legal issues of artificial intelligence and machine learning in veterinary radiology and radiation oncology
|
2829c465-9562-47ab-be2d-2cb71ef1abba
|
10107688
|
Internal Medicine[mh]
|
INTRODUCTION 1.1 Background In order for artificial intelligence (AI) to be trustfully adopted in veterinary medicine, it needs to be lawful, ethical, and robust. As veterinary medicine evolves with new technologies, veterinarians are charged with upholding and adhering to ethical conduct. This is particularly important in the realm of AI because technology innovators may be unfamiliar with, or insensitive to, healthcare policy or medical and research ethics. Accordingly, it is the role of the veterinary community (specifically, veterinary radiology and radiation oncology as domain experts) to educate and ensure ethical adoption of AI in veterinary diagnostic imaging and radiation oncology. 1.2 Artificial intelligence in medicine AI can be broadly defined as the design of computer systems to do things that would require intelligence if a human were to perform the same task, including applications such as voice recognition, pattern recognition and identification, or complex automation. In radiology, AI most frequently refers to computerized image analysis and interpretation. AI is distinct from image processing, which is a method to enhance an image or extract key image features. While both involve the use of a software algorithm, AI is often developed by machine learning (ML), applying an algorithm to large data sets with some knowledge associated with that data. The quality of AI predictions depends on the quality of training image data, knowledge about the training image data, inherent features of the algorithm itself, and consistency of correlations between imaging and biology. Basic algorithms may look at specific features of a CT scan (i.e., pre‐ and post‐contrast HU, size, heterogeneity), but more complex problem‐solving algorithms may have hundreds of layers intended to mimic the neural networks of the human brain. These algorithms can ‘learn’ without human supervision, drawing from data that is unstructured and unlabeled. 1.3 Relevant veterinary medical ethical principles to be applied to AI The American Veterinary Medical Association (AVMA) outlines the principles of ethical conduct in the practice of veterinary medicine. Several of these principles are particularly relevant to the development and implementation of technologies such as AI in diagnostic imaging and are paraphrased below: Veterinarians should be influenced only by the welfare of the patient, needs of the client, and safety of the public. Clinical care shall be provided under the terms of a veterinarian‐client‐patient relationship (VCPR). Veterinarians shall safeguard medical information within the confines of the law. Veterinarians must protect the privacy of clients and must not reveal confidences unless it becomes necessary to protect the health and welfare of other individuals or animals. Medical records are the property of the practice and the practice owner. Information within veterinary medical records is confidential. It must not be released except as required or allowed by law, or by consent of the owner of the patient. Without express permission of the practice owner, it is unethical for a veterinarian to remove, copy, or use medical records for personal or professional gain. Veterinarians shall continue to study, apply, and advance scientific knowledge. Veterinarians are the only professionals licensed to diagnose and treat diseases in animals. In addition to the ethical guidelines that veterinarians adhere to, laws and regulations define and limit the scope of when, where, and how veterinarians practice.
These guidelines, laws, and regulations help ensure the safety of patients and the public, but AI had not yet emerged when they were conceived. This problem is not unique to medicine. Indeed, regulators are currently struggling to define how to ensure safety and responsibility for many AI tools ranging from self‐driving cars to facial recognition. 1.4 Veterinary ethical principles applied to AI in diagnostic imaging and radiation therapy In consideration of the ethical principles described above, AI should be adopted in veterinary diagnostic imaging and radiation therapy only when it improves the welfare/outcomes of the patients, needs of the client, and/or safety of the patient. It should be applied to clinical care under the terms of a VCPR. The principle of ‘informed consent’ is critical to the clinical use of AI in medicine. It should not be assumed that pet owners understand what AI entails. AI providers should specifically describe how pet health data is obtained, stored, and used. A disclosure should be provided to pet owners stating what personal or personally identifiable information is shared, who has access to this data, and for what purposes. This includes whether the data or products created from their data will be sold or shared with outside parties. One ethical concern involving AI in diagnostic imaging is the ‘black box’ nature of many algorithms. Veterinarians may not be able to understand how or why an AI tool has made a certain determination or recommendation. Knowledge held within an algorithm that cannot be understood or shared with the medical community cannot be reasonably analyzed or reviewed. It may be subject to biases based on the population of previous inputs that might not be identifiable until severe errors or outcomes occur. Algorithms may be constantly learning, and without an understanding of their process, we cannot ensure accuracy and resultant patient safety over the course of their use. Veterinarians should disclose to pet owners when AI is part of their pet's diagnosis, along with their own understanding of the diagnosis and the accuracy/reliability of that information. A veterinarian's ability to provide an accurate disclosure will depend on transparency from AI providers. When a misdiagnosis or medical error occurs (such as inappropriate dose delivery in radiation oncology), a root cause analysis should be performed. This systematic process seeks to identify the cause of an adverse event, in order to prevent the same error in the future. If an AI system causes harm, we must be able to understand why. New procedures will be needed to analyze adverse AI outcomes.
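Before turning to questions of adoption in the next section, the distinction drawn in section 1.2 between ‘basic’ feature‐based algorithms and multi‐layer networks can be made concrete. The sketch below is a deliberately simple, hypothetical decision rule over hand‐engineered CT features; every feature name and threshold is invented for illustration and does not correspond to any real product or validated criterion. Its value here is that the logic is fully inspectable, in contrast to a deep‐learning model whose hundreds of layers of learned weights cannot be read off in the same way.

```python
from dataclasses import dataclass


@dataclass
class CtLesionFeatures:
    """Hand-engineered features of the kind a 'basic' algorithm might use."""
    pre_contrast_hu: float   # mean attenuation before contrast (Hounsfield units)
    post_contrast_hu: float  # mean attenuation after contrast
    diameter_mm: float       # maximum lesion diameter
    heterogeneity: float     # e.g. scaled standard deviation of voxel values, 0-1


def flag_suspicious(lesion: CtLesionFeatures) -> bool:
    """Toy, explicitly written-out decision rule (all thresholds are invented).

    A deep-learning model would replace these few readable rules with millions
    of learned parameters, which is the 'black box' problem discussed above.
    """
    strong_enhancement = (lesion.post_contrast_hu - lesion.pre_contrast_hu) > 20
    large = lesion.diameter_mm > 30
    heterogeneous = lesion.heterogeneity > 0.5
    return (strong_enhancement and heterogeneous) or large


print(flag_suspicious(CtLesionFeatures(35.0, 70.0, 18.0, 0.62)))  # -> True
```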
ETHICAL ADOPTION OF AI 2.1 Guiding principles As with any new and disruptive technology, AI has the potential to change how we practice veterinary medicine. This will happen in some ways we can predict, and likely in many ways we cannot. As we usher in this novel technology, it is incumbent on us as a profession to establish guiding principles to safeguard our animal patients, their human owners, and our veterinary colleagues. It is useful to remind ourselves of the veterinarian's oath: “Being admitted to the profession of veterinary medicine, I solemnly swear to use my scientific knowledge and skills for the benefit of society through the protection of animal health and welfare, the prevention and relief of animal suffering, the conservation of animal resources, the promotion of public health, and the advancement of medical knowledge. I will practice my profession conscientiously, with dignity, and in keeping with the principles of veterinary medical ethics. I accept as a lifelong obligation the continual improvement of my professional knowledge and competence.” Within this oath that all veterinarians take prior to entering into practice are a few notable principles directly applicable to AI. We are charged to use our skills and knowledge to balance sometimes‐opposing needs of reducing animal suffering and improving welfare with the advancement of medical knowledge. Even without AI, we are well aware that these facets of our oath can sometimes be at odds. This is why the oath requires our actions and practice be guided by ethics, so that scientific advancements are not instituted blindly. In research, advancing medical knowledge sometimes creates suffering or reduction of health of an individual animal for the benefit of many animals. These decisions are made with appropriate review processes in place such as institutional animal care and use committees (I.A.C.U.C.), which serve to ensure that investigations are performed in such a way as to limit animal numbers (and resultant suffering), have appropriate study design to address the question at hand, and create defensible downstream data which we as veterinarians can employ to improve our practice. As we are also charged with lifelong learning and increasing our knowledge and competence, we must learn to embrace developing technologies while safeguarding our patients. Alongside the veterinarian's oath, we are guided by the chief principle of the Hippocratic oath, primum non nocere, or first, do no harm. In order to adhere to this tenet, we must have a strong understanding of the consequences of how we practice, and the effects of the technologies we employ. If we do not know or cannot assess the accuracy of an AI product, we have no way to gauge the potential harm of its use. It is therefore obligatory for us as veterinarians to ensure we know both the potential upsides (increased efficiency, increased accuracy, error reduction) and downsides (misdiagnosis, incomplete diagnosis, patient harm) of AI as we consider when and how it should be employed in veterinary practice. To quote Potter Stewart, “Ethics is knowing the difference between what you have a right to do and what is right to do.” In veterinary medicine we have the ability to perform euthanasia and the responsibility to determine when this is ethically indicated. When releasing a product or deciding upon using it in clinical decision‐making, we should consider what could be termed the “Euthanasia Principle”.
Namely, is the decision or recommendation of an AI product defensible and validated to a sufficient degree so as to be ethical when that decision warrants euthanasia? It is incumbent on us as a profession, AI developers, and end‐users of this technology to inject a conscience and ethics into a product that inherently has none. In the case of veterinary radiologists and radiation oncologists, the best way to have this level of understanding of AI is to be “in the loop” from development to deployment. , Direct involvement of a human domain expert serves to guide which parts of imaging and therapy practice would be best served by the addition of AI, aids in the curation and assessment of the data sets used to create those algorithms, and provides expert oversight and assessment of their performance. Our unique skillsets and knowledge should be leveraged to maximize AI, and AI used to magnify those skillsets, rather than seek to create human obsolescence. Our expertise lies not simply in seeing a lesion or making a radiation target, but in our clinical decision‐making and in synthesizing the totality of a case in conjunction with imaging in order to provide the best patient care. Narrow‐use AI (AI for a specific limited task) will be first to arrive, but general‐use AI (AI behaving as human intelligence) may also become a reality. Even for the latter, a human in the loop will remain necessary to provide oversight and conscience/ethics. Until an AI system is created and proven to be 100% accurate in all patients and settings (unlikely, but not impossible), a human in the loop is necessary to intervene when errors/failures occur (ideally prior to any patient detriment). A recent charter endorsed by corporations such as Microsoft and IBM introduced the idea of “algor‐ethics,” with guiding principles of transparency, inclusion, accountability/responsibility, impartiality, reliability, security, and privacy. When considering AI use that augments or potentially replaces what veterinarians do, we must examine the concepts of standard of care (SOC) and best practice (BP). Patient care is a spectrum, but SOC can be broadly regarded as the minimum level of acceptable care. Legally, veterinary SOC is adapted from prior human cases, prescribed as “the standard of care required of and practiced by the average reasonably prudent, competent veterinarian in the community.” This is the ordinary level of care and does not carry the expectation that the average veterinarian will have the highest level of knowledge and skill. A specialist such as a veterinary radiologist or radiation oncologist is held to a higher standard, that of a competent member of their specialty. While BP (sometimes termed the gold standard) is not currently feasible in every instance, it is reasonable that a radiologist interpreting images or a radiation oncologist planning and performing radiation therapy can be considered BP. If AI is to be employed in a way that augments or replaces specialist‐level practice, as it is currently being marketed, it should be held to this higher BP standard. While current efforts are largely aimed at replacement of human expertise, it stands to reason that employing AI to extend and magnify human expertise should be the target for new BP.
2.2 Ethical development Ethical AI development encompasses everything from data ethics on a granular level to the overall purpose of AI development (improving patient care, profit generation). Transparency of AI products, the companies producing them, the data used to create them, and the systems in place to assess performance, errors, and bias is what will ultimately either engender trust and adoption or mistrust and opposition/hesitancy. SOC in veterinary medicine is not currently radiologist interpretation of all imaging studies, but it is reasonable to posit that review of all imaging studies by a radiologist/domain expert would constitute BP. Radiologist workforce capabilities do not currently allow for this, but it is a reasonable goal for the profession to work towards. In regard to ethical AI development, it stands to reason that a radiologist (or radiation oncologist) in the loop would also constitute BP. , , BP is not only data set review by a domain expert (though this is an important consideration), but also direct involvement of the domain expert in the planning, development, deployment, and subsequent assessment of AI products. The absence of direct domain expert involvement in the development of a product seeking to supplant or augment a domain expert's role seems questionable at best from an ethical standpoint. A domain expert is best suited to determine which questions/applications should be addressed and in what order, to provide guidance in data set choice and collation, to assess performance and errors and safeguard against the latter, and to implement the product in a manner that increases access to BP medicine. , The “black box” nature of AI means that we as humans may not know (or be able to comprehend) how an AI product reaches a clinical decision, particularly one that employs deep learning. , This raises the question as to whether a veterinarian is justified in making a clinical decision (e.g. euthanize a patient) based on an algorithm output (e.g. pulmonary nodules identified on a thoracic radiograph indicating metastatic neoplasia) when the clinician does not know how the algorithm reached the decision. AI can be developed so as to expose how the algorithm or neural network is working and generating an output. This transparency would provide key insight into errors and pitfalls, which could then be used to optimize the system. This directly applies to products that may identify individual diagnoses or imaging signals (as opposed to general AI). If we do not know how diagnoses are reached, we inherently cannot understand when the AI could be incorrect when it encounters novel image sets. The “black box” concept could also be applied to the transparency of AI companies, including what validation data, performance data, and error monitoring they disclose (or is even available). Veterinary products do not fall under the same regulatory guidelines as human products (further described below), and companies bringing veterinary AI products to market are not required to report validity and performance data. A clinician may be unable to assess how an algorithm is reaching conclusions in the classic sense of the “black box”, but they may also be unable to assess performance for the clinical question at hand if this information has not been made public or assessed by the AI developer. Another downside of “black box” technology is that it prevents a clinician from learning and augmenting their practice and interpretation based on AI decisions. Insights may be gained from AI that can be used subsequently in non‐automated or “AI‐free” practice. For example, AI may correlate what a human might view as seemingly disparate variables (e.g.
alkaline phosphatase with a lung pattern) to achieve a more accurate diagnosis, which could be leveraged for improved human performance and understanding. It is not sufficient that a product can distinguish normal from pathology; we should strive to know why and how it is distinguishing normal from pathology. This must be balanced while still leveraging the power of deep learning and AI, namely to analyze and create associations the human brain cannot. Ethical considerations of datasets used in AI development include where and how datasets were sourced for algorithm training, and whether there was consent (owner, clinician/facility generating the images) for use. For a product to be considered ethical, it should have been stringently tested and validated. In order for that to occur, robust and properly curated training and validation datasets must be employed. Imaging datasets could range from images alone, to images keyworded to diagnosis, images with a domain expert (i.e., radiologist) report, or images with the consensus opinion of multiple domain experts, with or without relevant historic, clinical, biochemical, or cytological/histopathologic information. Different datasets will create fundamentally different AI product performance and outcomes. Better or worse datasets will depend to some degree on intended use, but image interpretation and accuracy by domain experts are optimized in conjunction with relevant clinical data, second opinions, and follow‐up. It stands to reason that more complete datasets would constitute BP for AI development versus images alone, and would engender the greatest confidence in use once validated. Transparency of the datasets used allows the clinician to understand whether employing that product for clinical decision‐making in real patients is ethical. It should be considered unethical to use the same data set for training, validation, and testing purposes. If this is not disclosed to end users, a fair appraisal of the product cannot be made. How data sets were preprocessed, labeled, and validated is essential to understanding whether the resultant product is doing what it is purported to do, and ultimately whether its clinical use is ethical. Ethical AI development and deployment should include disclosure of performance data, such as accuracy, sensitivity, specificity, positive/negative predictive values, the scenarios/diagnoses for which the AI has been applied, and the datasets on which this was established. If no standardized performance has been assessed, this should be disclosed. It is important for the end‐user to know whether the algorithm can perform with optimal or poor image quality and positioning. Similarly, what methods (if any) are being employed for ongoing calibration of a released product, and what (if any) performance metrics are released, should be transparent and available. If no such protocols are in place, this should be disclosed. Similarly, peer‐reviewed publications are an important aspect of validity with respect to algorithm performance, as the rigors of peer review apply scrutiny to claims made about an AI product's proposed utility. While the lack of these components does not render a product or its use unethical, it should raise serious questions as to why a product was released in the absence of generally acceptable scientific validation and monitoring. Bias within datasets and resultant algorithm creation is an important topic in human medicine.
Skewed data sets, such as those based primarily on white males, may not be widely applicable to diagnosis in a diverse population of other sexes and races. , While we do not have exactly the same considerations in veterinary medicine, inherent bias can be present in data sets. This could result from which patient populations (and the financial status of their owners) end up having imaging studies performed, whether imaging occurred at first‐opinion or referral practices, image quality, and whether certain breeds or body types (e.g. chondrodystrophic vs. non‐chondrodystrophic) are over‐ or underrepresented. These factors may not be evident unless the composition of the training and validation sets is known, and errors are being monitored and assessed, particularly when algorithms are not performing as expected. 2.3 Use There are many potential applications of AI in veterinary medicine, and in particular in veterinary imaging and radiation therapy. Ethical application of AI is inherently tied to the question of what is BP. Particularly in a landscape without a regulatory framework, how AI is deployed is an ethical question. In the absence of a regulatory framework to safeguard clinicians, patients, and their owners, veterinary medicine becomes reliant on the ethics of the developers releasing products to market. Current products are focused on clinical diagnosis or identification of pathology in images. The advantage of AI is that it is not subject to fatigue or human cognitive errors. However, as these products are currently marketed and used, they may serve to exacerbate cognitive errors in the humans using their output to make a clinical decision. One such error is automation bias, or the choice to accept a machine‐generated decision regardless of the clinical picture or discordant information. This leads to errors of both omission and commission, and is likely to be worse in the absence of a domain expert in the loop. Leveraging AI to increase domain expert access to patients would seem the most ethical path forward, whilst reducing errors such as automation bias. Image assessment is just one way that AI could facilitate a radiologist being a radiologist. Numerous other uses could increase radiologist workflow and efficiency, which has the potential to drive down cost per read and increase what a radiologist can do. These include: assessment of technique and positioning at the point of care/acquisition, to reduce the submission of unnecessary images and provide standardized diagnostic images of areas of interest to the interpreter; worklist stratification, identifying and flagging STAT vs. non‐STAT cases; optimization of hanging protocols; image optimization to increase signal to noise even in suboptimal raw data; natural language processing to pre‐populate reports in the radiologist's own style; construction of differential diagnoses based on the radiologist's description or conclusions; provision of relevant recommendations and related articles; and mitigation of intra‐ or inter‐observer variability within or across time points, and at times when errors may be higher due to reader fatigue. These applications do not include image diagnosis per se, but serve to off‐load time‐consuming tasks away from the radiologist, increasing productivity and accuracy, and ultimately allowing a radiologist to focus on employing their expertise.
, , , Leveraging AI in this fashion (as opposed to using AI to replace radiologists) is a practical way to increase clinician and patient access to domain experts in a radiologist‐deficient market. This also keeps an expert embedded in the process, who can aid in product/algorithm optimization, error identification, and feedback as to what other applications would help domain experts perform their job better and more efficiently. These concepts are supported by a recent survey study of 88 non‐radiologist clinicians across multiple specialties in human medicine, where respondents were significantly less comfortable acting upon reports generated by AI alone versus a radiologist's report, but had similar comfort between a radiologist's report and an AI/radiologist hybrid report. Nearly 90% of clinicians in this study preferred a hybrid model compared to AI‐generated reports alone. It is worth noting that these responses come from a profession focused on one species, whose members receive more training (even as family practitioners) and have more validated products due to the regulatory framework. We should then ask the question: how can it be ethical to employ these products for clinical diagnosis in veterinary medicine, across multiple species, with variable image quality and positioning, in the absence of a domain expert in the loop or similar regulatory safeguards? Identical principles apply to the use of AI in radiation oncology, where automated segmentation of organs‐at‐risk and/or tumor/target volumes has the potential to increase speed/efficiency but also the potential to introduce medical errors that adversely impact treatment. The use of AI in the dose delivery and/or quality assurance processes in radiation oncology has the potential to lead to overdoses or underdoses that could cause harm to patients and may be very difficult to identify, even upon retrospective inspection. Inaccurate dose planning and delivery not only have the potential to harm an individual patient (incomplete radiation delivery, delivery to unintended tissues), but could also skew patient outcomes when those individual patients are recruited into studies of tumor response.
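As a practical footnote to the disclosure points raised in section 2.2, the sketch below shows the kind of performance summary (accuracy, sensitivity, specificity, and predictive values) that could be reported for any AI product, computed from its results on a held‐out test set that was never used for training or validation. It is a generic, illustrative Python sketch: the counts are invented, and no particular product, platform, or vendor is implied.

```python
from dataclasses import dataclass


@dataclass
class ConfusionCounts:
    """Outcomes on a held-out test set, judged against a reference standard."""
    tp: int  # algorithm flagged disease, reference standard agrees
    fp: int  # algorithm flagged disease, reference standard disagrees
    fn: int  # algorithm missed disease that the reference standard found
    tn: int  # algorithm and reference standard both negative


def performance_summary(c: ConfusionCounts) -> dict:
    """Headline metrics an AI developer could disclose alongside a product."""
    return {
        "accuracy": (c.tp + c.tn) / (c.tp + c.fp + c.fn + c.tn),
        "sensitivity": c.tp / (c.tp + c.fn),  # proportion of true disease detected
        "specificity": c.tn / (c.tn + c.fp),
        "ppv": c.tp / (c.tp + c.fp),          # positive predictive value
        "npv": c.tn / (c.tn + c.fn),          # negative predictive value
    }


# Invented counts, for illustration only.
print(performance_summary(ConfusionCounts(tp=80, fp=15, fn=20, tn=185)))
```

Note that the predictive values depend on disease prevalence in the evaluation population, which is one reason the composition of the test dataset matters as much as the headline numbers, and why reusing training or validation cases in the test set, as cautioned against above, tends to inflate them.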
Guiding principles As with any new and disruptive technology, AI has the potential to change how we practice veterinary medicine. This will happen in some ways we can predict, and likely in many ways we cannot. As we usher in this novel technology, it is incumbent on us as a profession to establish guiding principles to safeguard our animal patients, their human owners, and our veterinary colleagues. It is useful to remind ourselves of the veterinarian's oath: “Being admitted to the profession of veterinary medicine, I solemnly swear to use my scientific knowledge and skills for the benefit of society through the protection of animal health and welfare, the prevention and relief of animal suffering, the conservation of animal resources, the promotion of public health, and the advancement of medical knowledge.” I will practice my profession conscientiously, with dignity, and in keeping with the principles of veterinary medical ethics. “I accept as a lifelong obligation the continual improvement of my professional knowledge and competence. ” Within this oath that all veterinarians take prior to entering into practice are a few notable principles directly applicable to AI. We are charged to use our skills and knowledge to balance sometimes‐opposing needs of reducing animal suffering and improving welfare with the advancement of medical knowledge. Even without AI, we are well aware that these facets of our oath can sometimes be at odds. This is why the oath requires our actions and practice be guided by ethics, so that scientific advancements are not instituted blindly. In research, advancing medical knowledge sometimes creates suffering or reduction of health of an individual animal for the benefit of many animals. These decisions are made with appropriate review processes in place such as institutional animal care and use committees (I.A.C.U.C.), which serve to ensure that investigations are performed in such a way to limit animal numbers (and resultant suffering), have appropriate study design to address the question at hand, and create defensible downstream data which we as veterinarian's can employ to improve our practice. As we are also charged with lifelong learning and increasing our knowledge and competence, we must learn to embrace developing technologies while safeguarding our patients. Alongside the veterinarian's oath, we are guided by the chief principle of the Hippocratic oath, primum non nocere , or first, do no harm. In order to adhere to this tenet, we must have a strong understanding of the consequences of how we practice, and the effects of the technologies we employ. If we do not know or cannot assess the accuracy of an AI product, we have no way to gauge the potential harm of its use. It is therefore obligatory for us as veterinarians to ensure we know both the potential upsides (increased efficiency, increased accuracy, error reduction) and downsides (misdiagnosis, incomplete diagnosis, patient harm) of AI as we consider when and how it should be employed in veterinary practice. To quote Potter Stewart, “ Ethics is knowing the difference between what you have a right to do and what is right to do .” In veterinary medicine we have the ability to perform euthanasia and the responsibility to determine when this is ethically indicated. When releasing a product or deciding upon using it in clinical decision‐making, we should consider what could be termed the “Euthanasia Principle”. 
Namely, is the decision or recommendation of an AI product defensible and validated to a sufficient degree so as to be ethical when that decision warrants euthanasia? It is incumbent on us as a profession, AI developers, and end‐users of this technology to inject a conscience and ethics into a product that inherently has none. In the case of veterinary radiologists and radiation oncologists, the best way to have this level of understanding of AI is to be “in the loop” from development to deployment. Direct involvement of a human domain expert serves to guide what parts of imaging and therapy practice would be best served by the addition of AI, aids in curation and assessment of the data sets used to create those algorithms, and provides expert oversight and assessment of their performance. Our unique skillsets and knowledge should be leveraged to maximize AI, and AI used to magnify those skillsets, rather than seeking to create human obsolescence. Our expertise lies not simply in seeing a lesion or making a radiation target, but in our clinical decision making and in synthesizing the totality of a case in conjunction with imaging in order to provide the best patient care. Narrow use AI (AI for a specific limited task) will be first to arrive, but general use AI (AI behaving as human intelligence) may also become a reality. Even for the latter, a human in the loop will remain necessary to provide oversight and conscience/ethics. Until an AI system is created and proven to be 100% accurate in all patients and settings (unlikely, but not impossible), a human in the loop is necessary to intervene when errors/failures occur (ideally prior to any patient detriment). A recent charter endorsed by corporations such as Microsoft and IBM introduced the idea of “algor‐ethics,” with guiding principles of transparency, inclusion, accountability/responsibility, impartiality, reliability, security, and privacy. When considering AI use that augments or potentially replaces what veterinarians do, we must examine the concepts of standard of care (SOC) and best practice (BP). Patient care is a spectrum, but SOC can be broadly regarded as the minimum level of acceptable care. Legally, veterinary SOC is adapted from prior human cases, prescribed as “the standard of care required of and practiced by the average reasonably prudent, competent veterinarian in the community.” This is the ordinary level of care, and does not carry the expectation that the average veterinarian will have the highest level of knowledge and skill. A specialist such as a veterinary radiologist or radiation oncologist is held to a higher standard, that of a competent member of their specialty. While BP (sometimes termed gold standard) is not currently feasible in every instance, it is reasonable that a radiologist interpreting images or a radiation oncologist planning and performing radiation therapy can be considered BP. If AI is to be employed in a way that augments or replaces specialist‐level practice as it is currently being marketed, it should be held to this higher BP standard. While current efforts are largely aimed at replacement of human expertise, it stands to reason that employing AI to extend and magnify human expertise should be the target for new BP.
Ethical development Ethical AI development encompasses everything from data ethics on a granular level to the overall purpose of AI development (improving patient care, profit generation). Transparency of AI products, the companies producing them, the data used to create them, and the systems in place to assess performance, errors, and bias is what will ultimately either engender trust and adoption or mistrust and opposition/hesitancy. SOC in veterinary medicine is not currently radiologist interpretation of all imaging studies, but it is reasonable to posit that review of all imaging studies by a radiologist/domain expert would constitute BP. Radiologist workforce capabilities don't currently allow for this, but this is a reasonable goal for the profession to work towards. In regards to ethical AI development, it stands to reason that a radiologist (or radiation oncologist) in the loop would also constitute BP. , , BP is not only data set review by a domain expert (though this is an important consideration), but also direct involvement of the domain expert in the planning, development, deployment, and subsequent assessment of AI products. The absence of direct domain expert involvement in the development of a product seeking to supplant or augment a domain expert's role seems questionable at best from an ethical standpoint. A domain expert is best suited to determine what questions/applications should be addressed and in what order, provide guidance in data set choice and collation, assessment of performance and errors with safeguarding against the latter, and implementation of the product in a manner to increase access to BP medicine. , The “black box” nature of AI means that we as humans may not know (or be able to comprehend) how an AI product reaches a clinical decision, particularly one that employs deep learning. , This raises the question as to whether a veterinarian is justified to make a clinical decision (e.g. euthanize a patient) based on an algorithm output (e.g. pulmonary nodules identified on a thoracic radiograph indicating metastatic neoplasia) in which the clinician does not know how the algorithm reached the decision. AI can be developed so as to expose how the algorithm or neural network is working and generating an output. This transparency would provide key insight into errors and pitfalls, which could then be used to optimize the system. This directly applies to products that may identify individual diagnoses or imaging signals (as opposed to general AI). If we do not know how diagnoses are reached we inherently cannot understand when the AI could be incorrect when it encounters novel image sets. The “black box” concept could also be applied to the transparency of AI companies, including what validation data, performance data, and error monitoring they disclose (or is even available). Veterinary products do not fall under the same regulatory guidelines as human products (further described below), and companies bringing veterinary AI products to market are not required to report validity and performance data. A clinician may be unable to assess how an algorithm is reaching conclusions in the classic sense of the “black box”, but they may also be unable to assess performance for the clinical question at hand if this information has not been made public or assessed by the AI developer. Another downside of “black box” technology is that it prevents a clinician from learning and augmenting their practice and interpretation based on AI decisions. 
Insights may be gained from AI that can be used subsequently in non‐automated or “AI‐free” practice. For example, AI may correlate what a human might view as seemingly disparate variables (e.g., alkaline phosphatase with a lung pattern) to achieve a more accurate diagnosis, which could be leveraged for improved human performance and understanding. It is not sufficient that a product can distinguish normal from pathology; we should strive to know why/how it is distinguishing normal from pathology. This must be balanced while still leveraging the power of deep learning and AI, namely to analyze and create associations the human brain cannot. Ethical considerations of datasets used in AI development include where and how datasets were sourced for algorithm training, and whether there was consent (owner, clinician/facility generating the images) for use. For a product to be considered ethical, it should have been stringently tested and validated. In order for that to occur, robust and properly curated training and validation datasets must be employed. Imaging datasets could range from combinations of images alone, images keyworded to diagnosis, images with domain expert (i.e., radiologist) report, images with consensus opinion of multiple domain experts, and presence or absence of relevant historic, clinical, biochemical, or cytological/histopathologic information. Different datasets will create fundamentally different AI product performance and outcomes. Better or worse datasets will depend on intended use to some degree, but image interpretation and accuracy by domain experts are optimized in conjunction with relevant clinical data, second opinions, and follow‐up. It stands to reason that more complete datasets would be BP for AI development vs. images alone, and engender greatest confidence in use once validated. Transparency of datasets used allows the clinician to understand whether employing that product for clinical decision‐making in real patients is ethical. It should be considered unethical to use the same data set for training, validation, and testing purposes. If this is not disclosed to end users, a fair appraisal of the product cannot be made. How data sets were preprocessed, labeled, and validated is essential to understanding whether the resultant product is doing what it is purported to do, and ultimately whether its clinical use is ethical. Ethical AI development and deployment should include disclosure of performance data, such as accuracy, sensitivity, specificity, positive/negative predictive values, the scenarios/diagnoses for which the AI has been applied, and the datasets on which that performance was assessed. If no standardized performance has been assessed, this should be disclosed. It is important for the end‐user to know whether the algorithm can perform with optimal or poor image quality and positioning. Similarly, what methods (if any) are being employed for ongoing calibration of a released product and what (if any) performance metrics are released should be transparent and available. If no such protocols are in place, this should be disclosed. Similarly, peer‐reviewed publications are an important aspect of validity with respect to algorithm performance, as the rigors of peer review apply scrutiny to claims made about an AI product's proposed utility. While lack of these components does not necessarily make a product or its use unethical, it should raise serious questions as to why a product was released in the absence of generally acceptable scientific validation and monitoring.
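To make the disclosure of performance data concrete, the sketch below shows how the basic metrics named above can be derived from a two‐by‐two confusion matrix. It is a minimal illustration only; the counts are hypothetical placeholders rather than results from any real product, and a meaningful disclosure would also state the reference standard, the test population, and confidence intervals.

```python
# Minimal sketch: standard diagnostic test metrics from a 2x2 confusion matrix.
# The counts used in the example are hypothetical placeholders.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, predictive values, and accuracy from raw counts."""
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")  # true-positive rate
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")  # true-negative rate
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")          # positive predictive value
    npv = tn / (tn + fn) if (tn + fn) else float("nan")          # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy}

# Hypothetical hold-out test set of 200 radiographs scored against a
# radiologist consensus reference standard.
print(diagnostic_metrics(tp=42, fp=9, tn=140, fn=9))
```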
Bias within datasets and resultant algorithm creation is an important topic in human medicine. Skewed data sets, such as those based primarily on white males, may not be widely applicable to diagnosis in a diverse population of other sexes and races. While we don't have the exact same considerations in veterinary medicine, inherent bias can be present in data sets. This could result from which patient populations (and the financial status of their owners) end up having imaging studies performed, whether imaging occurred at first opinion or referral practices, image quality, and whether certain breeds or body types (e.g., chondrodystrophic vs. non‐chondrodystrophic) are over‐ or underrepresented. These factors may not be evident unless the composition of the training and validation sets is known, and errors are being monitored and assessed, particularly when algorithms are not performing as expected.
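As a purely illustrative example of how the composition of a training set might be audited for the sources of bias described above, the sketch below tallies case proportions by species, breed, and practice type. The field names and records are hypothetical; a real audit would also consider body type, acquisition site, and image quality.

```python
# Minimal sketch: auditing training-set composition for over- or
# underrepresentation. Records and field names are hypothetical.

from collections import Counter

training_cases = [
    {"species": "canine", "breed": "Labrador Retriever", "source": "referral"},
    {"species": "canine", "breed": "Dachshund", "source": "first opinion"},
    {"species": "feline", "breed": "Domestic Shorthair", "source": "first opinion"},
    # ... in practice, thousands of records
]

def composition(cases, field):
    """Proportion of cases in each category of a metadata field."""
    counts = Counter(case[field] for case in cases)
    total = sum(counts.values())
    return {category: round(n / total, 3) for category, n in counts.items()}

for field in ("species", "breed", "source"):
    print(field, composition(training_cases, field))
```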
Use There are many potential applications of AI in veterinary medicine, and in particular veterinary imaging and radiation therapy. Ethical application of AI is inherently tied to the question of what is BP. Particularly in a landscape without a regulatory framework, how AI is deployed is an ethical question. In the absence of a regulatory framework to safeguard clinicians, patients, and their owners, veterinary medicine becomes reliant on the ethics of the developers releasing products to market. Current products are focused on clinical diagnosis or identification of pathology in images. The advantage of AI is that it is not subject to fatigue or human cognitive errors. However, as these products are currently marketed and used, they may serve to exacerbate cognitive errors in the humans using their output to make a clinical decision. One such error is automation bias, or the choice to accept a machine‐generated decision regardless of the clinical picture or discordant information. This leads to both errors of omission as well as commission, and is likely to be worse in the absence of a domain expert in the loop. Leveraging AI to increase domain expert access to patients would seem the most ethical path forward, whilst reducing errors such as automation bias. Image assessment is just one way that AI could facilitate a radiologist being a radiologist. Numerous other uses could increase radiologist workflow and efficiency, which has the potential to drive down cost per read and increase what a radiologist can do. These include:
1. Assessment of technique and positioning at the point of care/acquisition, to reduce unnecessary image submission and provide standardized diagnostic images of areas of interest to the interpreter.
2. Worklist stratification, identifying and flagging STAT vs. non‐STAT cases (a minimal sketch of this appears at the end of this section).
3. Optimization of hanging protocols.
4. Image optimization to increase signal to noise even in suboptimal raw data.
5. Natural language processing to pre‐populate reports in the radiologist's own style.
6. Construction of differential diagnoses based on the radiologist's description or conclusions.
7. Provision of relevant recommendations and related articles.
8. Mitigation of intra‐ or inter‐observer variability within or across time points, and at times when errors may be higher due to reader fatigue.
These applications do not include image diagnosis per se, but serve to off‐load time‐consuming tasks away from the radiologist, increasing productivity and accuracy, and ultimately allowing a radiologist to focus on employing their expertise. Leveraging AI in this fashion (as opposed to using AI to replace radiologists) is a practical way to increase clinician and patient access to domain experts in a radiologist‐deficient market. This also provides an expert in‐line with the process, who can aid in product/algorithm optimization, error identification, and feedback as to what other applications would help domain experts perform their job better and more efficiently. These concepts are supported by a recent survey of 88 non‐radiologist clinicians across multiple specialties in human medicine, where respondents were significantly less comfortable acting upon reports generated by AI alone versus a radiologist's report, but had similar comfort between a radiologist's report and an AI/radiologist hybrid report. Nearly 90% of clinicians in this study preferred a hybrid model compared to AI‐generated reports alone.
It is worth noting that these responses come from a profession focused on one species, whose members receive more training (even for a family practitioner) and have more validated products due to the regulatory framework. We should then ask the question: how can it be ethical to employ these products for clinical diagnosis in veterinary medicine, across multiple species, with variable image quality and positioning, in the absence of a domain expert in the loop or similar regulatory safeguards? Identical principles apply to the use of AI in radiation oncology, where automated segmentation of organs‐at‐risk and/or tumor/target volumes has the potential to increase speed/efficiency as well as the potential to introduce medical errors that adversely impact treatment. The use of AI in the dose delivery and/or quality assurance processes in radiation oncology has the potential to lead to overdoses or underdoses that could cause harm to patients and may be very difficult to identify, even upon retrospective inspection. Inaccurate dose planning and delivery not only has potential to harm an individual patient (incomplete radiation delivery, delivery to unintended tissues), but could skew patient outcomes when those individual patients are recruited into studies for tumor response.
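As a minimal illustration of the worklist stratification item listed above (and not of any existing product), the sketch below assumes a hypothetical upstream model that returns an urgency score for each study; only the triage logic that sorts and flags the reading queue is shown. Note that the domain expert remains the reader in this arrangement; the algorithm only reorders the queue.

```python
# Minimal sketch: sorting a reading worklist by a hypothetical model-derived
# urgency score and flagging likely-STAT studies for the radiologist.

from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    urgency_score: float  # assumed output of an upstream model, in [0, 1]

def triage(worklist, stat_threshold=0.8):
    """Return (study, is_stat) pairs ordered most-urgent first."""
    ranked = sorted(worklist, key=lambda s: s.urgency_score, reverse=True)
    return [(s, s.urgency_score >= stat_threshold) for s in ranked]

queue = [Study("A1001", 0.35), Study("A1002", 0.91), Study("A1003", 0.62)]
for study, is_stat in triage(queue):
    print(study.accession, "STAT" if is_stat else "routine")
```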
ETHICS OF DATA MANAGEMENT, OWNERSHIP, AND CUSTODIANSHIP
Sources and ownership of data Per the AVMA veterinary ethical guidelines, medical records are the property of the practice and the practice owner. Although veterinary medicine is not bound to the Health Information Portability and Accountability Act (HIPAA) which applies to human medical data, information within veterinary medical records is confidential. This leads to several important ethical questions. When and how should patient images and data be shared? Is it ethical to sell patient data? Is anonymization of patient data necessary or sufficient to ethically share with a third party? How can informed consent to share patient data for machine learning be obtained when the general public is generally uninformed about AI? If owners consent to sharing of data for machine learning, should this be shared in a centralized repository of data or sold privately to the highest bidder? When patient data is used to train an algorithm, who owns the trained algorithm?
Consent, anonymization/privacy, and data management It is useful to consider how other types of data are treated with respect to consent, ownership, and use. Under the EU General Data Protection Regulation (GDPR), patients own and control their data and must give explicit consent for data re‐use or sharing. Under these rules, patients must give consent to have imaging studies used to train an AI algorithm. That consent may need to be re‐obtained for each version of the algorithm. Anonymization may be less important in veterinary vs. human medicine, but it should be recognized that real anonymization is more complicated than most people realize. Fully anonymized data sets may be manipulated in ways that allow their source to be identified. DICOM header data typically contains identifiers that include information about a patient, client, and institution. In addition, metadata is information that describes other data without containing its content, but it may also convey private information. Given that large data companies control social media platforms and AI applications, it is feasible for an entity to match veterinary patient data to human families, leading to unwanted consequences. Ethical use of patient data demands that those contributing data and AI developers be aware of these risks and take all steps possible to protect privacy, such as removing identifying DICOM header data. While in an ideal world all data and algorithms would be open for the public to examine, there are legitimate issues relating to cybersecurity and to protection of intellectual property and investment. Cybersecurity of data cannot be guaranteed, and a breach could result in unauthorized access to, or disclosure of, personal information. Without adequate anonymization of data and informed consent, clients may not know who their data is being shared with and what additional cybersecurity risks they may be exposed to. The likelihood of consequential harm is significantly less in veterinary medicine than human medicine, as it is unlikely that release of a pet's medical data would result in discrimination, humiliation, or increased insurance costs. Yet, these are not impossibilities. Any privacy breach violates the duty of veterinary providers to their clients and patients and may result in unintended harm or embarrassment. Most risks associated with misuse of data can be at least partially managed by obtaining appropriate consent. Under the EU's GDPR, patients may give “broad consent” to have their data used for scientific research that keeps within recognized ethical standards.
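As one illustration of the kind of privacy step described above, the sketch below blanks a handful of common identifying DICOM header elements using the open‐source pydicom library. It is an assumption‐laden example rather than a complete solution: the tag list is deliberately short, private and nested tags need additional handling, and no amount of header scrubbing substitutes for appropriate consent.

```python
# Minimal sketch: blanking common identifying DICOM header elements before
# sharing images for algorithm development. Illustrative only; the tag list
# is not exhaustive and does not follow any formal de-identification profile.

import pydicom

IDENTIFYING_KEYWORDS = [
    "PatientName", "PatientID", "PatientBirthDate", "ReferringPhysicianName",
    "InstitutionName", "InstitutionAddress", "AccessionNumber",
]

def deidentify(src_path: str, dst_path: str) -> None:
    ds = pydicom.dcmread(src_path)
    for keyword in IDENTIFYING_KEYWORDS:
        if keyword in ds:
            ds.data_element(keyword).value = ""  # blank rather than delete, keeping file structure
    ds.remove_private_tags()  # private vendor tags frequently carry identifiers
    ds.save_as(dst_path)
```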
Data value and ownership Data ownership has important implications when that data is used to generate a profitable AI product or business. It has been argued that the patients whose data are used to develop the tool should share in the resulting profits, but no mechanisms or rules are in place to guide this. There is a significant potential conflict of interest between healthcare providers, veterinary clients/patients, AI developers, and the overarching public interest in open access to data that can improve medical care. An acceptable principle is that a patient can only be considered to have given consent for others to use their data if they have also been informed of the monetary value of that data. Unfortunately, intellectual property rights for work derived from patient health data are ambiguous.
Data sharing and custodianship When consent has not been specifically granted for a specific purpose/project, the data custodian (veterinary hospital, radiologist) acts as the gatekeeper for access to that data. In that role, they are in a position to determine what projects can use the data and to some degree how the data is used, in conjunction with I.A.C.U.C. or research ethical review boards. As data becomes an important commodity, to what extent should a data custodian be able to charge for granting a third‐party access to de‐identified patient data? Furthermore, should pet owners share in profits generated from their pet's medical data? Guidelines must be established as to what is BP for data sharing and custodianship.
RESPONSIBILITY AND LIABILITY
AI within the VCPR and liability When considering the use of AI, we must consider who or what is liable in the case of a negative outcome. Is liability held by the veterinarian using the AI, the hospital who employs it, or the AI developer? In order to approach these questions, we must first define the veterinary‐client‐patient relationship (VCPR) and who holds it. In order for a VCPR to be established, several criteria must be met. A veterinarian must have assumed responsibility for clinical decisions regarding patient health with client consent. That veterinarian must have adequate patient knowledge to initiate a preliminary diagnosis on the patient's condition, including timely examination of the patient and knowledge of how the patient is kept and cared for. The veterinarian should be available for follow‐up assessment or arrange for continued care. The veterinarian oversees treatment, compliance, and outcome. The veterinarian must maintain patient records. In the current setting of AI use, the receiving veterinarian (a non‐radiologist, and often a non‐specialist veterinarian) holds the VCPR, and as a result the liability, as they will decide upon diagnostic and treatment choices. According to the American Veterinary Medical Association (AVMA) Principles of Veterinary Medical Ethics , “When appropriate, attending veterinarians are encouraged to seek assistance in the form of consultations and/or referrals. A decision to consult or refer is made jointly by the attending veterinarian and the client. When a private clinical consultation occurs, the attending veterinarian continues to be primarily responsible for the case and maintaining the VCPR. Consultations usually involve the exchange of information or interpretation of test results. However, it may be appropriate or necessary for consultants to examine patients. When advanced or invasive techniques are required to gather information or substantiate diagnoses, attending veterinarians may refer the patients. A new VCPR is established with the veterinarian to whom a case is referred.” We should then consider exactly what AI decision‐making is within these guidelines. Is this a telemedicine referral with a non‐human colleague? Is this a consult? Is this simply a diagnostic akin to running a blood chemistry? If this is a consult or referral, it would seem that owner consent to the use of AI in decision making is required and should be disclosed and sought prior to use. The current paradigm certainly involves the exchange of information and interpretation of imaging by AI, which sounds a lot like a consult. This again begs the question of whether AI does, should, or even can hold the VCPR in these interactions. Also within the AVMA Principles is, “Referral is the transfer of responsibility of diagnosis and treatment from a referring veterinarian to a receiving veterinarian.” While this would seemingly not include an AI product or company as constituting referral from the standpoint of AI not being a veterinarian, it could be considered a referral if responsibility of diagnosis and treatment is shifted to the AI. We must also acknowledge that these principles were created at a time when the concept of AI producing a diagnosis was not yet a reality. If AI is performing tasks similar to what a receiving consulting veterinarian would, it begs the question whether a new VCPR is or should be established by the AI/developer.
If so, then there should be resultant shifting of liability away from the receiving veterinarian to the AI/developer in instances of their use. This is particularly relevant in the current landscape where the receiving veterinarian holds the VCPR and resultant liability, but may not be provided with appropriate information or have the ability to assess the validity of an AI product nor the diagnoses and recommendations it produces. It becomes increasingly clear that these are issues that need to be addressed proactively, ideally prior to release of these products into the market. The lack of intentional planning on these issues runs the risk of it being brought to task by an index case guided by evolving tort law, with the untoward effect of not only individual patient, client, or veterinarian harm, but stifling of continued use and advancement of these new technologies. Harm from AI could be caused in many ways. An AI product could be non‐contributory to patient diagnosis/care, merely adding to client costs. It could yield a false‐positive diagnosis, leading to subsequent tests or interventions (up to and including surgery or euthanasia). It could lead to false‐negative results, delaying necessary diagnosis and care. It could be applied to inappropriate datasets or populations (e.g. applying an algorithm to a non‐diagnostic image with questionable results). It could be employed incorrectly by a human user. The AI could simply be faulty and generate erroneous decisions. Incorrect results could lead to subsequent faulty data in research projects sampling this data. In human medicine, the Canadian Association of Radiologists have produced a white paper on this topic, with a proposed scheme to address who holds the responsibility/liability in the instance of misdiagnosis or malpractice when AI is employed (Table ). A version of this could certainly be adopted in veterinary medicine. It is important to highlight that in human medicine, the proposed liability scheme is based on radiologists/domain expert as the human in the loop, which is not the current paradigm in veterinary medicine. Despite this being a more robust system in human medicine, when surveyed, non‐radiologist clinicians in human medicine felt that liability for errors when using AI should fall on the hospital (65.9%), the radiologist (54.5%), and the AI developer (44.3%). Only 4.5% felt that liability should be held by the referring physician. If our human colleagues are not comfortable using AI reports alone from products which are more rigorously tested, why should veterinarians (in particular non‐domain experts) feel comfortable using products with no oversight? We must address who/what holds the VCPR, what constitutes a referral, and who holds the resultant liability when AI is used. In particular, so our non‐domain expert colleagues are not left holding the liability bag. We should ask ourselves as a profession, is it ethical to shift liability to general practitioners, emergency veterinarians, or non‐imaging specialists who use a product whose accuracy is not published or otherwise known? Is the average veterinary clinic or hospital (which may be individually or corporate owned) prepared to accept liability of their use? If AI begins to perform similar diagnostic tasks to a veterinarian, shouldn't AI/developers also then be held to the same standards including the guiding principle of “first, do no harm”?
Regulatory oversight Regulation and oversight, or lack thereof, is one of the key differences when considering the use of AI in veterinary medicine versus human medicine. The regulatory framework in human medicine is actively evolving, as our human medical colleagues navigate how to best employ and safeguard these emerging technologies. In order to understand ethical implications in veterinary medicine, we must consider the current regulatory landscape. The United States Food and Drug Administration (FDA) regulates medical devices. Section 201(h) of the U.S. Federal Food, Drug, and Cosmetic Act (FDCA) defines a medical device as “an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent or other similar or related article…intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals…intended to affect the structure or any function of the body of man or other animals”. AI and similar products have come to be regarded as software as a medical device (SaMD). However, the 21st Century Cures Act (Pub. L. No. 114–255) introduced an exemption in section 520(o) of the FDCA for some software. In order for software to be exempt from being considered a medical device, it must be used for, “…administrative support…”, “…to serve as electronic patient records and is not intended to interpret or analyze patient records…”, “… transferring, storing, converting formats, or displaying clinical laboratory test or other device data and results…”, and “…for maintaining or encouraging a healthy lifestyle and is unrelated to diagnosis, cure, mitigation, prevention, or treatment of a disease or condition”. Section 520(o)(1)(E) further states that to be exempt software must meet several criteria simultaneously, including, “…not intended to acquire, process, or analyze a medical image or signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system…”, “…for the purpose of… supporting or providing recommendations to a healthcare professional about prevention, diagnosis, or treatment of a disease or condition”, and is, “…for the purpose of…enabling such healthcare professionals to independently review the basis for such recommendations that such software presents so that is not the intent that such healthcare professionals rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient”. These guidelines clearly support the radiologist in the loop concept, and that AI products used to generate diagnoses would be classified as medical devices (and subject to their regulation). The FDA has gone on to provide clarifications that echo guiding principles in AI development proposed earlier in this manuscript, advising that, “…the software developer should describe the underlying data used to develop the algorithm and should include plain language descriptions of the logic or rationale used by an algorithm to render a recommendation”, and that, “…the sources supporting the recommendation or the sources underlying the basis for the recommendation should be identified and available to the intended user”. Transparency of data sets, avoidance of black box development, and access to development information are clear guidelines from the FDA.
In order to foster innovation, the FDA also offers that it, “…does not intend to enforce compliance even if the last criterion is not met (decisions primarily on software recommendations) as long as it is used for non‐serious or non‐critical conditions”. This last statement has led to the creation of a risk stratification framework for the use of AI. Adapted versions of these frameworks are provided in Tables and . Applying these frameworks to veterinary medicine, it is evident that almost every possible decision of current AI products would fall into medium risk (decisions having diagnostic or therapeutic purposes), and many would fall into higher or highest risk. Remembering the “Euthanasia Principle”, AI diagnoses such as “pulmonary nodules‐metastases”, “aggressive bone lesion”, or “mechanical ileus/obstruction” may lead to euthanasia or invasive treatment of patients, depending on prognosis and client finances. As many decisions in veterinary medicine must be made at a single visit, there are very few if any AI‐assisted imaging diagnoses that are low risk. Coupled with what is often little to no follow up on patients or their diagnoses, there are fewer checks and balances to catch an incorrect AI diagnosis than in human medicine. Applying current FDA guidelines for human AI products, AI as it is currently being employed in veterinary medicine should be considered a medical device/SaMD and subject to appropriate testing, oversight, and regulation. The lack of such regulation is another key difference between human and veterinary medicine. The FDA currently has no requirements for pre‐market approval of medical devices intended for animal use. This means there are no restrictions to bringing an AI product to the veterinary market, and no safeguards to ensure proper testing, accuracy, or performance. This begs some key questions. In the absence of any governmental regulatory framework to safeguard veterinary AI products, can we rely on AI developers alone to properly test and ethically deploy these products? How can an end user properly evaluate AI products for themselves in the absence of necessary data? Is it ethical to test AI products on clinical patients without consent? How and to what extent should these considerations be disclosed to owners when AI products are used in the diagnosis of their non‐human family? How can risk be properly disclosed if this information is not known by the end‐user veterinarian or provided by the AI developer? Barring large‐scale legislative changes to safeguard patients and their owners, it becomes the responsibility of the profession to safeguard our patients, our clients, and ourselves from potentially harmful AI products. This should be led by domain experts, namely the ACVR/ECVDI and their constituency, at least in how it relates to diagnostic imaging and radiation oncology. Guidelines must be developed for what to look for in a validated AI product, how to identify questionable products, how to assess performance, error reporting, and future ethical development. While the ACVR/ECVDI are not regulatory bodies, criteria could be developed which AI developers could meet, similar to AAFCO statements in veterinary nutrition. This would at least provide some assurance that a product has met minimum requirements of what domain experts deem sufficient for preliminary use.
Against the backdrop of otherwise absent regulations, this further supports the necessity of a radiologist in the loop when AI products are employed in live patients for the foreseeable future. AI developers with ethical aims should seek involvement and adoption by domain experts as the BP path forward. In human medicine, regulation further distinguishes whether an AI is “locked” once released to market (i.e., the algorithm does not change over time), versus requiring additional regulatory approvals when substantive changes to the AI are made. The advantage of AI is the ability for it to continuously learn and improve over time. With no oversight, how is the end user to know if an AI is performing better or worse than prior iterations, or if it is being improved iteratively at all? This question again highlights the importance of transparency from AI developers, so that end‐users can be conscientious in the use of AI.
Ethical marketing of AI With no regulatory safeguards or hurdles preventing entry of AI products into the veterinary marketplace, their adoption and use or lack thereof will largely become a function of product marketing and first‐hand use. In the AVMA Principles of veterinary medical ethics, “Advertising by veterinarians is ethical when there are no false, deceptive, or misleading statements or claims. A false, deceptive, or misleading statement or claim is one which communicates false information or is intended, through a material omission, to leave a false impression. ” This is further echoed in FDA guidance where an animal medical device is considered misbranded if, “labeling is false or misleading”. Prevention of inappropriately tested products from entering the market may not be feasible under the current regulatory landscape, but the profession and domain experts in particular can play a role in alerting proper authorities when these codes of ethical marketing have been breached. Ultimately, this will have to be met with educational marketing from domain experts (radiologists and radiation oncologists), to inform non‐domain experts of what to look for in trustworthy or questionable products, what we feel proper and ethical use for these technologies are, and which if any products we endorse as constituents of the ACVR/ECVDI.
GENERAL RECOMMENDATIONS FOR ETHICAL USE OF AI IN VETERINARY MEDICINE AI technology should do no harm. Radiologists and other domain experts should be “in the loop” from start to finish of development, deployment, and supervision of AI products. AI companies and their products should be transparent, and provide/disclose information relating to data use, validation and training, calibration, outcomes, and errors. AI products should be subject to peer review (ideally prior to entry into the market for clinical use) and guided by position/white papers by domain experts (e.g. ACVR/ECVDI) when available. When medical errors occur, a root‐cause analysis should be performed to identify the points at which decision making was faulty. Ideally, this would be shared in a national database. Companies should be transparent when errors occur. Until further progress is made, the profession should strive to have radiologists involved in the final imaging diagnosis in conjunction with AI, rather than diagnosis by AI alone.
The authors have declared no conflict of interest.
Category 1 Conception and Design: Cohen, Gordon Analysis of Data: Cohen, Gordon Interpretation of Data: Cohen, Gordon Category 2 Drafting the Article: Cohen, Gordon Revising It Critically for Important Intellectual Content: Cohen, Gordon Category 3 Final Approval of the Version to be Published: Cohen, Gordon Category 4 Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved: Cohen, Gordon
|
A novel
|
b0943dff-d89c-42f9-969f-48845f845546
|
10107862
|
Anatomy[mh]
|
INTRODUCTION Soft tissue myoepitheliomas (STMs) have gained increasing interest and characterization within recent literature. STMs demonstrate predominantly or exclusively myoepithelial differentiation; the histogenesis of which has not been well elucidated. The majority of STMs occur on the limbs and limb girdles, arising within the subcutaneous and deep soft tissues as well as bone. , A comprehensive molecular profiling of STM indicates that ~45% of cases harbor EWSR1 gene rearrangement with a variety of different partner genes such as POU5F1, PBX1 , ZNF444 , ATF1 , and KLF17 . , Furthermore, there have been reports of EWSR1 negative gene fusion STM, including SRF::E2F1 and those involving FUS gene rearrangements. Rearrangements involving PLAG1 , frequently identified in myoepitheliomas and myoepithelial carcinoma of salivary gland origin, are not observed in STMs, but have been detected in a subset of cutaneous mixed tumors. , Noteworthy, a rare but distinctive benign cutaneous variant of myoepithelial neoplasm, cutaneous syncytial myoepithelioma, displays a predilection for the distal extremities in adults and exhibits a unique histomorphology and immunophenotypic profile characterized by S100 and epithelial membrane antigen (EMA) reactivity and infrequent keratin expression. Cutaneous syncytial myoepitheliomas harbor recurrent EWSR1::PBX3 gene fusion in virtually all cases, suggesting that they are phenotypically and molecularly distinct from STM. , The histopathological spectrum of STM varies from spindled, ovoid, or epithelioid cells arranged in cords or nests within a hyalinized stroma or arranged in reticular or trabecular pattern within a myxoid stroma. The lesional cells display positive immunohistochemical reactivity to keratins and S100 with variable expression of EMA, glial fibrillary acid protein (GFAP), and SOX10. Herein, we describe a unique case of a spindle cell myoepithelioma harboring a novel IRF2BP2::CDX2 fusion, arising within the intravascular space of the hand and colliding with papillary endothelial hyperplasia (PEH), and Masson tumor.
MATERIAL AND METHODS 2.1 Immunohistochemistry Immunohistochemical stains using the following commercially available antibodies were performed using the Leica Bond III autostainers (Leica Biosystems): CD34 (Cell Marque, QBEnd/10, 1:400), smooth muscle actin (SMA; Dako, 1A4, 1:600), desmin (Dako, D33, 1:200), ERG (BioCare M, 9FY, 1:50), S100 (Dako, R poly, 1:2000), AE1/E3 (Dako, AE1&AE3, 1:1200), Ki‐67 (Dako, MIB‐1, 1:400), EMA (Dako, E29, 1:700), BCL‐2 (Dako, 124, 1:1200), MUC4 (Cell Marque, 8G‐7, 1:25), SOX‐10 (BioCare, BC34, 1:200), GFAP (Dako, R poly, 1:10000), calponin (Dako, CALP, 1:600), p63 (Biocare, 4A4, 1:300), CD163 (Leica, 10D6, 1:900), and CDX2 (BioGeneX, M mono, 1:300). All protocols were developed by and performed at the OSU Wexner Medical Center Clinical Laboratory, 410 West 10th Avenue, Doan Hall 310, Columbus, OH 43210. All positive and negative controls showed appropriate staining. 2.2 Fluorescence in situ hybridization Translocation involving the EWSR1 gene at 22q12 was evaluated with interphase fluorescent in situ hybridization (FISH) on formalin‐fixed and paraffin‐embedded (FFPE) tissue sections using a commercially available LSI break‐apart probe set (Vysis). The LSI EWSR1 dual color, break‐apart rearrangement probe consisted of a mixture of two FISH DNA probes. The first probe, a ~500 kb probe labeled in SpectrumOrange, flanked the 5′ side of the EWSR1 gene and extended inward into Intron 4. The second probe, a ~1100 kb probe labeled in SpectrumGreen, flanked the 3′ side of the EWSR1 gene. Using a panel of normal tissues and well‐defined sarcomas not expected to harbor an EWSR1 translocation, split signals were identified in fewer than 15% of cells. 2.3 Next‐generation sequencing and reverse transcriptase polymerase chain reaction validation Microscopic examination of an hematoxylin and eosin (H&E) stained slide was performed by a pathologist to identify areas of tumor for macrodissection and subsequent RNA extraction. Areas of tumor consisting of ≥10% neoplastic cells were required for testing. RNA was extracted from 5‐μm thick FFPE unstained slides using Qiagen miRNeasy FFPE kits. The SARCP gene fusion panel was used which consists of a targeted, custom‐designed, polymerase chain reaction (PCR)‐based panel designed in collaboration with QIAGEN. The panel assesses 138 genes for fusions (>280 fusion variants) in 39 sarcoma types. For a list of the genes and the specific targeted regions of each gene see www.mayocliniclabs.com (Test ID SARCP). In brief, the QIAseq‐Targeted RNAscan Panel utilizes a single primer extension target enrichment and unique molecular identifier technology to identify gene fusions. RNA samples were converted to double‐stranded cDNA, end repaired, and A‐tailed. The cDNA was ligated with a UMI and separate sample index. Adapter‐ligated cDNA molecules were subject to limited target enrichment using single primer extension. Universal PCR was carried out to amplify the library and add a second sample index. The final library was then sequenced on a MiSeq sequencer (Illumina). SARCP next‐generation sequencing (NGS) data were bioinformatically analyzed using SeekFusion an internal pipeline that uses a combination of traditional alignment and de novo assembly‐based approaches. 
Fusion/rearrangement nomenclature was based on build GRCh37 (hg19) and RefSeq reference transcripts (RefSeq accession numbers) for the genomic coordinates (chr1:g234744193) for IRF2BP2 and (chr13:g28539152) for CDX2 corresponding to transcripts NM_182972 and NM_001265, respectively. 2.3.1 Reverse transcriptase ‐PCR and sanger sequencing Reverse transcriptase‐PCR (RT‐PCR) and Sanger sequencing were performed at the Mayo Clinical Laboratories (Rochester, Minnesota) to confirm the NGS assay‐positive results. RNA was extracted from FFPE unstained slides of the tumor using Qiagen miRNeasy FFPE kits. RNA was converted to cDNA using the SuperScript III VILO cDNA synthesis kit. The cDNA was used as the genomic template to specifically amplify the fusion of interest using the following primers designed for the IRF2BP2::CDX2 fusion (Exons 1–2) developed based on the IGV breakpoint sequence (SARCP NGS results): forward Primer: IRF2BP2 Exon 1– CAGGCAGGTTGTTGGGTTTC and reverse Primer: CDX2 Exon 2– TTTCCTCCGGATGGTGATGTAG. Primers were checked for specificity with BLAT and SNP checks. The PCR product was purified using the AMPure XP purification system (Beckman Coulter). Sanger sequencing was performed using universal primers that bind to the UPS tags. The Sanger sequencing product was purified using the Agencourt CleanSEQ Dye‐Terminator removal system (Beckman Coulter) and then loaded onto the ABI 3730xl DNA Analyzer (ThermoFisher) for capillary electrophoresis. Sequencing electropherograms were viewed using the Mutation Surveyor v4.09 software (SoftGenetics).
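As a purely illustrative aid (not part of the authors' workflow), the break‐apart scoring logic described above can be sketched in a few lines of Python; only the 15% cutoff comes from the text, while the function name and the nuclei counts in the example are assumptions for demonstration.

```python
# Hypothetical sketch of break-apart FISH scoring; only the 15% cutoff
# reflects the threshold stated above, all other values are invented.

def score_break_apart(split_signal_nuclei: int, total_nuclei: int,
                      cutoff: float = 0.15) -> str:
    """Classify a specimen for a break-apart probe from nuclei counts."""
    if total_nuclei <= 0:
        raise ValueError("total_nuclei must be positive")
    fraction = split_signal_nuclei / total_nuclei
    return "rearranged" if fraction >= cutoff else "not rearranged"

# Example: 6 of 100 scored nuclei show split orange/green signals, which is
# below the 15% background cutoff, i.e. no evidence of gene rearrangement.
print(score_break_apart(6, 100))  # -> not rearranged
```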
RESULTS 3.1 Case report The present case features a 52‐year‐old male who presented with a right index finger mass (Figure ). The mass had been present since adolescence (more than 20 years ago), remaining constant in size without associated pain or numbness. An x‐ray was performed, revealing a focal, ovoid, soft tissue swelling along the radial aspect of the second digit at the level of the proximal interphalangeal joint without evidence of dislocation or underlying osseous changes (Figure ). An excisional biopsy was performed with negative margins and sent for pathological examination. Gross examination revealed a cystic, tan, soft tissue lesion measuring 1.6 × 1.2 × 0.8 cm. The cyst wall was 0.1 cm in maximum thickness with a smooth tan lining. Histopathological evaluation revealed a well‐circumscribed tumefactive lesion, composed predominantly of a banal spindle cell proliferation with cells arranged in tight fascicles, colliding with areas of PEH in association with an organizing thrombus. The less dominant areas consisted of spindled cells within a prominent myxoid stroma arranged in a fine reticular growth pattern as well as cords of banal cells within a hyalinized stroma with minimal pleomorphism and no appreciable increased mitotic rate or necrosis (Figure ). By immunohistochemistry, the lesional cells were positive for keratin‐AE1/AE3, EMA, S100, and glial fibrillary acid protein (GFAP) as well as CK7, SOX10, calponin, and BCL‐2, and negative for SMA, desmin, p63, CD163 (positive expression identified in intralesional histiocytes), MUC4, CD34, and ERG (positive endothelial lining of the vessel) (Figure ). The Ki‐67 proliferative index was estimated at ~1% by manual quantitation. Taken together, the overall features were those of a benign intravascular spindle cell myoepithelioma of soft tissue in collision with PEH (Masson tumor). In view of the strikingly unique location of this tumor and the histopathologic mimicry with the salivary gland counterpart, we considered the possibility of a vascular tumor embolus from an otherwise pleomorphic adenoma (metastasizing pleomorphic adenoma), but examination of the head and neck region and MRI of the chest, abdomen, and pelvis were negative for any masses. FISH for EWSR1 gene rearrangement was performed because EWSR1 gene rearrangements have been identified in approximately half of STMs. Translocation involving the EWSR1 gene at 22q12 was evaluated with interphase FISH and no break‐apart signals for the EWSR1 gene were detected above background. NGS performed at the Mayo Clinic Laboratories identified the IRF2BP2::CDX2 fusion (Figure ) involving the rearrangement of the IRF2BP2 gene at Exon 1 (chr1:g234744193) and the CDX2 gene at Exon 2 (chr13:g28539152). This novel gene fusion was confirmed by RT‐PCR and Sanger sequencing. Immunohistochemistry for CDX2 was negative. Taken together, the pathological diagnosis was intravascular spindle cell myoepithelioma of soft tissue with associated PEH, completely excised with negative margins. The patient has remained free from clinical recurrence 3 years after complete surgical excision.
DISCUSSION Soft tissue neoplasms involving the hand and digits are relatively common and comprise a heterogeneous group of entities with distinct histogenesis involving the skin, subcutaneous tissue, tendons, vasculature, and nerves, with differing implications for treatment and prognosis. STM is a relatively rare entity arising as a painless mass within the subcutaneous and deep soft tissue. Although diverse in location, STMs share morphologic and immunophenotypic characteristics with myoepitheliomas of the salivary gland. The histopathological features display great heterogeneity with a predominantly reticular growth pattern consisting of ovoid, eosinophilic, or spindled cells within a collagenous or chondromyxoid matrix. Heterologous differentiation including squamous, adipocytic, osseous, and cartilaginous elements may occur in a minority of myoepitheliomas. Lesional cells display frequent expression of broad‐spectrum keratins, S100, EMA, and GFAP. The histopathological features of the present case, consisting of spindle and epithelioid cells in a reticular pattern within a myxoid and hyalinized stroma, in concert with the prototypic immunophenotypic profile (co‐expression of S100 and keratins), were consistent with STM. Unique to this case, however, is the complete encapsulation of the mass by a vascular structure, indicating that the mass arose within the vessel. PEH or Masson tumor is a reactive lesion arising from the organization and recanalization of a thrombus. Located within the vessel of the index case was a focal area of papillary fronds covered by a single layer of endothelial cells consistent with a secondary component of PEH. Approximately half of the reported cases of myoepithelioma, myoepithelial carcinoma, and mixed tumor of soft tissue harbor recurrent EWSR1 gene rearrangements. Cases of EWSR1 rearrangement‐negative STMs have been reported. The SRF::E2F1 gene fusion was identified in a case of a spindle cell myoepithelioma of the iliac region and in a mixed type tumor of soft tissue of the foot, which was also found to harbor a second gene fusion, FUS::KLF17. Furthermore, the novel OGT::FOXO3 fusion was reported in two cases characterized as a myoepithelioma‐like hyalinizing epithelioid tumor involving the subcutis of the palmar region of the hand. To date, the IRF2BP2::CDX2 fusion has not been reported in any of the precedent cases of STM. Moreover, to the best of our knowledge, recurrent fusions involving the IRF2BP2 and CDX2 genes have not been described in human neoplasia. Interferon regulatory factor 2‐binding protein 2 (IRF2BP2) encodes a member of the IRF2BP family of transcription factors which function to regulate different cell signaling pathways. Dysregulation of IRF2BP2 signaling is thought to be a key player in tumorigenesis. Caudal type homeobox 2 (CDX2), at locus 13q12.2, is a member of the caudal‐related homeobox transcription factor gene family that plays an important role in the embryogenesis of the intestinal tract as well as regulation of intestinal‐specific genes in cellular growth and differentiation. Dysregulation of the CDX2 gene has been implicated in various malignancies including acute myeloid leukemia, colorectal carcinoma, pancreatic carcinoma, paranasal carcinoma, cervical carcinoma, and uveal neoplasms. Translocations involving the fusion partner IRF2BP2 have been described in the literature. The IRF2BP2::NTRK1 fusion has been identified in various cancers including non‐small cell lung cancer and NTRK‐rearranged thyroid carcinomas.
In a sarcoma panel validation study, the IRF2BP2::CDX1 fusion was detected in one case of soft tissue myoepithelioma that was negative for EWSR1 rearrangements by FISH. The t(1;5) IRF2BP2::CDX1 fusion has also been identified in one case of mesenchymal chondrosarcoma which displayed histomorphology similar to classic mesenchymal chondrosarcoma, consisting of hyperchromatic small round to spindled cells arranged in clusters or fascicles with variably formed hyaline cartilage and staghorn vessels, a morphology distinctly different from the present case. The differential diagnosis of intravascular spindle cell neoplasms is broad and often context dependent. Intravascular fasciitis (considered a variant of nodular fasciitis) is far more common and shows a variable cellular spindled proliferation with associated interstitial collagen, mucous cysts, and inflammation, the degree of which depends on the stage in the temporal evolution of the tumor. However, these lesions are frequently positive for SMA but negative for S100 and keratin. Molecularly, they tend to harbor USP6 gene rearrangements, a profile that distinguishes them from STM. Extraskeletal myxoid chondrosarcoma (EMC) shares common morphology with myoepitheliomas. Rearrangements in the NR4A3 gene are commonly encountered in EMC, with the majority of cases harboring the EWSR1::NR4A3 variant fusion. Clinically, EMC tends to occur in deep‐seated soft tissues and rarely in superficial locations or digits. Notwithstanding, both entities display eosinophilic spindled to ovoid cells distributed in cords within a myxoid matrix. Moreover, diffuse immunohistochemical reactivity to S100 and keratins is observed in myoepitheliomas and rarely in EMC. A variant of myoepithelial neoplasms known to occur in the skin, cutaneous syncytial myoepitheliomas, shares overlap features with STM. They comprise a group of mesenchymal neoplasms with myoepithelial differentiation and dermal involvement. They do exhibit characteristic histopathological features, consisting of syncytial growth of ovoid, eosinophilic, or spindled cells. The lesional cells share a similar immunophenotype to STM with expression of S100 and EMA. However, unlike STM, keratin expression is rare and GFAP expression is variable in cases of cutaneous myoepithelioma. Importantly, virtually all cutaneous syncytial myoepitheliomas are known to harbor the EWSR1::PBX3 gene fusion, while only about 45% of STMs manifest rearrangement of the EWSR1 gene with multiple partner genes. Rearrangements of FUS and PLAG1, seen in mixed tumor/chondroid syringoma, have also been detected in a subset of myoepitheliomas. STMs behave largely in a benign fashion. However, despite their low‐grade histopathology, local recurrence can occur in up to 20% of cases and, in exceptionally rare instances, these tumors may behave aggressively with metastasis. The malignant counterpart, myoepithelial carcinoma of soft tissue, is characterized by increased cytological atypia and mitotic activity compared with myoepithelioma. Our index case did not manifest any features of cellular anaplasia to the degree that would be concerning for myoepithelial carcinoma of soft tissue. Second, the long clinical history of >20 years' duration with stable size and without evidence of metastasis was doubly reassuring of the benign behavior of this entity. The prognostic implications, including local recurrence, of STM located within a vessel are not well known.
Given the low‐grade histomorphology, it is conceivable that the index case will follow a benign course, suggesting a prognosis akin to that of STM occurring in other soft tissue locations reported in the precedent literature. Further, the implications of the novel IRF2BP2::CDX2 gene fusion in this entity remain to be seen. Surgical excision remains the treatment of choice for this lesion. In summary, we present a unique description of an intravascular soft tissue myoepithelioma harboring a novel IRF2BP2::CDX2 gene fusion. This case expands the differential diagnosis of intravascular lesions to now include STM. It remains unclear, though unlikely, whether the unique location of this entity and the novel gene fusion will affect the clinical course and prognosis of the patient.
O. Hans Iwenofu, conceived the project. Ashley Patton, Xiaoyan Cui, Amy Speeckaert, Micayla Zeltman, Steve Oghumu, and O. Hans Iwenofu provided essential tools/data. Ashley Patton and O. Hans Iwenofu wrote the article. All authors approved the final article.
The authors declare that they do not have any conflicts of interest.
|
Incidence in pharmacoepidemiology: A conceptual framework for incidence of a single substance or group of substances with statins as an example
|
0487a1b9-97ae-40a3-bbe6-2abab15396f0
|
10107903
|
Pharmacology[mh]
|
INTRODUCTION In pharmacoepidemiology, the concept of incidence—a new case of drug use—is important from several different perspectives. A new case of drug use defines the start of a specific period of drug exposure. It also represents a decision by the prescriber to either treat a patient for the first time with a specific substance or group of substances (the first‐ever case of drug treatment with this substance of this patient) or to initiate a new period of drug treatment. In pharmacoepidemiology, dispensations of drugs are commonly used as a proxy for actual drug use over the period covered by the amount dispensed. The first dispensation of a drug is probably more sensitive to changes in prescribing habits than subsequent prescriptions or successive dispensations of the same prescription. Repeated treatment episodes with the substance, or a group of substances, over time with periods without treatment in between have to be analysed when studying incidence in pharmacoepidemiology. For instance, a new case of drug treatment should be differentiated from continuing treatment. In addition, first‐ever use has to be distinguished from a recurrent treatment episode. In epidemiology, measures of disease frequency such as incidence and prevalence are well defined, based initially on a simple illness–death model (also known as the disability model). Drug use is often intermittent for chronic diseases, either due to changes in the severity of the disease or non‐compliance. Drugs are mainly used to treat a disease or as secondary prevention in order to prevent possible future complications of a disease. However, they are also used for primary prevention of future disease in otherwise healthy individuals with an increased risk of becoming ill. The original simple model of incidence based on infectious diseases with immunity thus needs to be extended to be applicable for drug treatment where we consider treatment status instead of disease status (see Figure ). The definition of incidence is made more complicated because multiple drugs can be combined or used consecutively to treat a disease. It is essential to consider whether a new case of drug use representing a new case of treatment with the specific substance is preceded or not by other possible substitutes within or outside a specific pharmacological group defined, for instance, by the ATC system. A switch from one substance to another may have many different reasons. For the lipid‐lowering groups of statins (HMG‐CoA reductase inhibitors), the reasons might, for instance, be adverse drug reactions, an unsatisfactory lowering of blood lipid levels, or an increased risk for the patient of cardiovascular events. Other factors might be changes in the costs for the society or the patient, new generic competition, and changes in the pharmaceutical benefit scheme. With a strict definition of different types of new cases of drug use and a well‐defined methodology, it is possible to report incidence not only in studies of drug utilization but also as a standard measure in routine statistics of drug use. Incidence is already part of national standard annual drug utilization statistics from the National Board of Health and Welfare of Sweden, albeit only for some groups of substances. A more stringent methodology and a standardized mode of reporting the different incidences are essential when incidence becomes more widely adopted as a standard measure in publicly available drug utilization statistics. 
AIM This article aims to explore incidence as new cases of treatment with a specific drug or group of drugs and to develop a corresponding methodology and terminology for consistent reporting in drug utilization studies and national drug statistics. An additional aim is to illustrate this by analysing the initiation of treatment with statins in Sweden 2019. MATERIAL AND METHODS The Swedish Prescribed Drug Registry data were extracted as patient‐level data, fully anonymized and classified as statistics by the National Board of Health and Welfare. Substances were classified according to the Anatomical Therapeutic Chemical (ATC) classification system in 2020. , All first individual occurrences of the dispensation of C10AA HMG‐CoA reductase inhibitors and fixed combinations of HMG‐CoA reductase inhibitors in C10BA in Sweden for both sexes and all ages during 2019 were extracted, together with the ATC code and the number of days since the last dispensation of the same ATC code (total population = 10 230 185 with n of individuals with a dispensation of at least one statin = 1 017 058 corresponding to a 1‐year prevalence of 9.9%). In addition, the number of days since the last dispensation of any other studied substances was obtained with information on ATC code, gender, age (5‐year intervals up to ≥85) and Swedish citizen status on 1 January in 2009 and 2019. Stata was used for all data analyses. Simvastatin, pravastatin, atorvastatin and rosuvastatin in monotherapy constituted >99.9% of the 1‐year prevalence for all statins in C10AA during 2010–2019 in Sweden. The available fixed‐combination products C10BA02 simvastatin + ezetimibe and C10BA05 atorvastatin + ezetimibe represented 0.31% and 0.07% of the sale of respective statins in monotherapy (0.29% and 0.14% in 1‐year prevalence). The incidence proportion was calculated with the number of new cases (first‐ever or recurrent treatment) defined by different run‐in periods as the numerator and the population at the beginning of the year as the denominator. The positive predictive value was calculated as the ratio between the incidence proportions for different lengths of the run‐in compared with a run‐in of 10 years. It can be interpreted as the fraction of the new cases given a specific run‐in that represents first‐ever use, that is, no dispensation 10 years before the index dispensation. Using a 10‐year run‐in as a reference represented a pragmatic approximation defining users as actual first‐ever users of statins. The reason for this approach is the limitation of data available over time in many countries with national prescription databases covering individual‐level patient data of dispensations. Extending the run‐in from 10 to 13 years (the longest possible for dispensations in 2019 in Sweden) had a minimal impact on the incidence proportion (see Section ). 3.1 Methodological considerations when defining a new case of drug use Before we consider the main problem of patients being new to a specific substance or a group of substances, we briefly review the concepts of a run‐in period and incidence rates versus incidence proportion, as these are fundamental for analysing treatment initiation. 3.1.1 The effects of run‐in on different misclassifications There are several possible misclassifications when studying incidence. We have previously explored the concepts of a new case, first‐ever use and recurrent treatment and different types of misclassifications of incidence associated with varying the length of the run‐in period. 
A run‐in period (sometimes also called a washout period) is commonly used to differentiate between a dispensation indicating a new case of drug use and one representing a continuation of treatment. A short run‐in period will not differentiate well between first‐ever use and recurrence of treatment. With a long run‐in, a more significant fraction of new cases of drug use will represent first‐ever use. The run‐in consists of the total period without treatment and the assumed duration of the last dispensation. This pragmatic practice in register‐based studies will not be influenced by previous hoarding, change in dosage or a decision to end the treatment early (either by the prescriber or the patient). When comparing the incidence of drug use between countries and clinical settings, the assumed duration without treatment, and not the actual run‐in, must be considered since the average treatment duration of a dispensation varies between countries due to clinical practice and regulations. Suppose the average duration is 3–4 months as in Sweden due to the rules of the pharmaceutical benefit scheme. In that case, a 12‐month run‐in will usually represent a period between 8 and 12 months without treatment, while a 16‐month run‐in will represent at least a full year without treatment. If the average duration of a dispensation is 1 month, then the same run‐in period of 12 months in most cases will correspond to 11–12 months without treatment. 3.1.2 Incidence, incidence rate and proportion The incidence, the number of new cases in a defined population, is often presented as a rate or a proportion. In incidence rate, the denominator is the aggregated study time contributed by each studied individual (actual person‐time). The denominator in incidence proportion (also called the cumulative incidence) is the population at risk at the beginning of a time interval, for instance, a calendar year. For incidence proportions, individuals that emigrate or die during the studied interval will still contribute to the denominator for the entire interval. Thus, all other things being equal, the incidence proportion will be lower than the incidence rate in a population with high mortality, such as the elderly population at risk, if defined as the population at the beginning of a time interval. With a high level of immigration, the incidence proportion, all other things being equal, will be higher than the incidence rate if immigrants are not censored. If censoring for immigration, each individual should be censored in the numerator and the denominator for the length of the run‐in after the date of immigration since a prevalent user otherwise would be potentially misclassified as a new case of drug treatment. The traditional definition of incidence rate and incidence proportion in epidemiology focuses on persons at risk as the denominator. In pharmacoepidemiology (whether or not a cohort in rate or a population in a proportion), that would represent only those not classified as prevalent users. However, in drug utilization studies reporting incidence proportion, the whole population is often the denominator (see also Section ). 3.1.3 Substance or condition The reason for prescribing the substance might be considered when studying new cases of drug use if the information is available. However, this information is not registered in large claims or population databases in most instances. 
Where reasons for prescribing are available, they are not always reliable due to external factors such as reimbursement rules or a heavy workload influencing reporting. Linking prescriptions to specific diagnoses for the same or earlier healthcare episodes is possible in limited situations but creates considerable methodological challenges. Each prescription might be made for several different reasons, which might change over time. A disease such as depression often fluctuates in severity over several years. A new prescription leading to a dispensation, that is, a case of recurrent treatment, can then represent either a repeat treatment for the same reason or treatment with the same substance or group of substances for other reasons. 3.1.4 New on a drug or new on a group of drugs New cases of drug use can relate to a single substance or a group of substances. However, the number of new cases of a group of substances does not equal the sum of the number of new users of each substance since a patient that starts treatment with one substance might have been treated with another substance belonging to the same group earlier. When placing both individual substances and groups of these substances into a simple two‐level model, four different situations can be described: New on a group regardless of the substance— NoG New on a specified substance, whether treated earlier with another substance in the group or not— NoS New on a specified substance and new on the group— NoS_and_NoG New on specified substance and not new on group— NoS_not_NoG This classification can be exemplified as an analysis with two levels for a group with four different substances (see Table and Figure ). During the studied period of 2009–2019, with 10 years of run‐in for dispensations during 2019, only four different statins were dispensed in Sweden (Table ).
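To make the two‐level classification concrete, a minimal sketch is given below. It is not code from the study (the analyses were performed in Stata) and it assumes that, for each index dispensation, the register provides the number of days since the last dispensation of the same substance and since the last dispensation of any substance in the group (including the index substance itself), as in the extraction described above; a gap longer than the chosen run‐in, or no earlier dispensation at all, counts as new.

```python
# Minimal illustration of the two-level incidence classification (NoG, NoS,
# NoS_and_NoG, NoS_not_NoG). Function and variable names, the 365-day run-in
# and the example gaps are assumptions for demonstration, not study code.

def classify_dispensation(days_since_same_substance, days_since_any_in_group,
                          run_in_days=365):
    """Return the categories that apply to one index dispensation.

    A value of None means no earlier dispensation of that substance (or of
    any substance in the group) is recorded in the observable history.
    """
    new_on_substance = (days_since_same_substance is None
                        or days_since_same_substance > run_in_days)
    new_on_group = (days_since_any_in_group is None
                    or days_since_any_in_group > run_in_days)

    categories = []
    if new_on_group:
        categories.append("NoG")
    if new_on_substance:
        categories.append("NoS")
        categories.append("NoS_and_NoG" if new_on_group else "NoS_not_NoG")
    return categories

# First atorvastatin dispensation 400 days after the last statin of any kind:
print(classify_dispensation(None, 400))  # ['NoG', 'NoS', 'NoS_and_NoG']
# Switch to atorvastatin 90 days after the last simvastatin dispensation:
print(classify_dispensation(None, 90))   # ['NoS', 'NoS_not_NoG']
```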
RESULTS Table shows the incidence proportion with the total population as the denominator in 2019 and a different run‐in for new on statins as a group (NoG); new on each statin whether treated earlier with another statin or not (NoS); new on each statin and new on statins (NoS_and_NoG); and new on each statin and not new on group (NoS_not_NoG).
For a run‐in of 12 months, the incidence of new on statins (NoG) was 13.39 new cases per 1000 inhabitants, with a positive predictive value for first‐ever use of 63%. New on a specified statin and new on statins (NoS_and_NoG) varied between 9.99 new cases per 1000 inhabitants for atorvastatin and 0.06 for pravastatin. New on a specified statin, but not new on statins (NoS_not_NoG), varied between 0.70 for atorvastatin and 0.03 for pravastatin. In addition, 1.27 per 1000 inhabitants started treatment with any statin but had been treated with another statin during the run‐in (the difference between the sum of NoS_not_NoG for the individual substances and NoG). This corresponded to 9.5% of the individuals being new on statins (NoG). Extending the run‐in from 10 to 13 years (the longest possible run‐in for dispensations in 2019 in Sweden) had a minimal impact on the incidence proportion. For new on statins as a group, the decrease was less than 1% (from 8.40 to 8.34 new cases per 1000 inhabitants) in 2019. With increasing length of the run‐in period, the incidences for new on statins (NoG) and new both on a specified statin and on statins (NoS_and_NoG) decreased as expected, while their respective positive predictive value compared with a run‐in of 10 years increased. Concurrently, the incidence of new on a specified statin but not new on statins (NoS_not_NoG) increased since the observed time during which another statin might have been dispensed lengthened. DISCUSSION The focus of this study is the distinction between new cases of drug use in analyses for groups of substances (NoG) and the individual substances of the group defined in three different ways (NoS, NoS_and_NoG, NoS_not_NoG). The incidence and prevalence of statin use have been studied in different countries, periods and age groups. There is a significant variation in methodology between studies of statin incidence. Both incidence rates , , and incidence proportions , , , , are used when studying incidence. Individuals not at risk , , or the total population regardless of treatment status , were used as denominators for incidence proportions in the different studies. Studying only individuals at risk as a rate (per person‐time) or a proportion (during a defined period) describes the introduction of the drug among those not treated and thus available to become treated. Relating the new cases to all individuals is more straightforward in a study based on population registers. The latter approach is often the preferred choice for the incidence proportion based on register data since there is often no need to adjust for the prevalence in a simple time‐trend analysis. When comparing incidence proportion based on the total population between early and later phases of the introduction of a drug or between high‐ and low‐prevalence populations, it is advisable to assess the incidence in relation to the prevalence. With a commonly used group of substances such as statins, the difference in incidence between using persons at risk and the total population as the denominator will be significant if the prevalence is high. This is relevant for statins in Sweden, where the 1‐year prevalence in the whole population is 9.9%. This article calculates the reported incidence proportions of statins with the total population as the denominator. Correcting for a 1‐year prevalence of 9.9% would result in an approximately 11% higher incidence proportion for the non‐prevalent population. This could be further studied for different subpopulations. 
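As an illustrative cross‐check (using only figures already quoted in the text, not the underlying data), the positive predictive value for a 12‐month run‐in and the approximate effect of restricting the denominator to the non‐prevalent population can be reproduced as follows.

```python
# Back-of-the-envelope check using the figures quoted in the text above.

noG_12_month_run_in = 13.39  # new cases per 1000 inhabitants, 12-month run-in
noG_10_year_run_in = 8.40    # new cases per 1000 inhabitants, 10-year run-in
prevalence = 0.099           # 1-year prevalence of statin use in Sweden, 2019

# PPV for first-ever use with a 12-month run-in, taking the 10-year run-in
# as the pragmatic reference for true first-ever use.
ppv_12_month = noG_10_year_run_in / noG_12_month_run_in
print(f"PPV ~ {ppv_12_month:.0%}")  # ~63%

# Using only the non-prevalent (at-risk) population as denominator inflates
# the incidence proportion by roughly 1 / (1 - prevalence).
inflation = 1 / (1 - prevalence) - 1
print(f"at-risk denominator correction ~ +{inflation:.0%}")  # ~+11%
```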
There is a wide variation in handling the length of the run‐in in reports of incidence treatment with statins. For statins, a fixed run‐in of 12 months is common, , , but it can vary between 6 months and several years. The run‐in length should be defined based on the clinical question and whether the focus is on all new cases, only first‐ever use, or recurrence of treatment. In several studies, the length of the run‐in is not fixed based on the index date. Instead, the first dispensation during a calendar year is considered a new case of statin prescription if the individual had no dispensation during the preceding calendar year. In these cases, the chosen run‐in varies between 12 and 24 months depending on the date of the first dispensation. , , , , Well‐defined incidence measures are needed not only for studies of drug utilization but also as a part of general drug statistics. Changes in incidence could be used as an indicator of possible future changes in prevalence but also for more sensitive studies of the effects of interventions through, for instance, interrupted time‐series analyses of incidence instead of the number of dispensations or defined daily doses. For statins, an increased incidence has been reported related in time to the results of the 4S trial in 1994 and to both 4S and the West of Scotland Coronary Prevention Study (WOSCOPS). Kildemoes et al studied the relationship between the incidence of statins according to indication in Denmark in the period of 1996–2009 and several external factors such as evolving clinical evidence, international guidelines on CVD prevention, national CVD guidelines and healthcare policies and statin costs. There is a need for further development of methodology and terminology for incidence rates or proportions when presented in studies of drug utilization or introduced as a measure in regular aggregated statistics of drug use. In addition, the estimated misclassification depending on the length of run‐in and which types of new cases are studied (all new cases, first‐ever use or recurrent treatment) should be presented. Table summarizes suggestions for presenting incidence for a drug utilization review. CONCLUSIONS When studying new cases of drug treatment, it is essential to differentiate between those new to both the substance and possible substitutes (NoS_and_NoG) and those new to the substance but who have been treated earlier with substitutes during the chosen run‐in (NoS_not_NoG). In order to allow for consistent comparisons over time and between populations, new incidence measures with validated methodology and descriptions of the degree of misclassification are needed both for scientific studies of drug utilization and when introducing incidence as a measure in aggregated drug statistics. The authors report no conflicts of interest according to the ICMJE Disclosure Form.
|
Plant associated protists—Untapped promising candidates for agrifood tools
|
ad757d94-33fd-4752-bc6e-7b387cf45d0e
|
10108267
|
Microbiology[mh]
|
Living plants are hosts of a complex microbiome, comprising of bacteria, fungi, archaea, protists and viruses that internally and externally colonize plant tissues (Hassani et al., ; Sapp et al., ; Trivedi et al., ). These beneficial, neutral and pathogenic plant‐associated microorganisms can significantly influence plant health and performance. The plant hosts and their associated microbiomes are suggested to form a ‘holobiont’, where complex plant–microbe interactions play crucial roles in regulating and promoting plant growth, biogeochemical cycling, nutrient acquisition, fitness and protection, stress tolerance and disease suppression (Hassani et al., ; Liu et al., ). Plant associated microbiota, in some cases, even contribute more to plant protection and stress resistance than the defensive capacity of plant hosts (Hubbard et al., ). A holistic microbiome perspective to decipher the mechanisms that govern the assembly, interactions and functions of plant‐associated microbiota, therefore, is a prerequisite to facilitate translational research and develop microbiome‐based tools to enhance plant productivity and agricultural sustainability. A panoramic view of the plant microbiota cannot be complete without considering protists as a pivotal component. Bacteria dominate the plant microbiota, followed by fungi, while protists and other organisms (e.g., archaea, nematodes and other soil invertebrates) are less abundant, but they were shown to be crucial in plant health and performance (Leach et al., ; Chen et al., ). Bacteria and fungi in the rhizosphere are enriched by carbon sources stemming from root exudates of plants (via bottom‐up control), however, they are major microbial prey for protists and thus subject to top‐down control by protist consumers. Protists, representing the vast diversity of unicellular eukaryotes, function as consumers (main predators of bacteria, fungi and small animals), primary producers (important carbon fixers via photosynthesis), plant and animal parasites, and decomposers (Geisen et al., ). The contributions of protists to nutrient input, organic matter decomposition and plant health have been previously reported (Bonkowski, ; Xiong et al., ; Geisen et al., ). Nonetheless, plant‐associated protists and their functions for plant hosts, compared to bacteria and fungi, have been largely underestimated (Gao et al., ; Trivedi et al., ). While the importance of plant beneficial microorganisms as promising agrifood tools to improve crop production and agricultural sustainability has been increasingly recognized (Chen et al., ; Hu et al., ), the plant–protist–microbe interactions in the above‐ and below‐ground systems are not well understood.
Although protists have great potential to improve nutrition, suppress pathogens, promote plant growth, and function as bioindicators for plant health (Bonkowski, ; Xiong et al., ), protists colonizing the interior of plant tissues remain vastly untapped. Most studies on plant‐associated protists have focused on plant pathogenic or parasitic protists causing plant diseases (Dumack & Bonkowski, ) or on the belowground protist community, particularly in the rhizosphere (Fiore‐Donno et al., ). Bacteria and fungi are well‐characterized plant microbiome components with distinct community compositions across different compartments (e.g., phyllosphere, anthosphere, leaf and root endospheres, rhizosphere, and bulk soils) (Liu et al., ; Trivedi et al., ; Sun et al., ). Given the selective feeding preference of different protist groups for bacteria and fungi (Dumack et al., ), plant compartments at different developmental stages may harbour distinct taxonomic and functional diversity, community structure and functions of protists. In this article, we discuss known functions of protists and propose their potential roles and activities in different compartments of plants (Figure ; Table ).
Phyllosphere‐associated protists
Protists form key members of the plant microbiome and an external force shaping plant microbiome assembly (Geisen et al., ; Gao et al., ), but the diversity and feedback of protists on the phyllosphere microbiome remain surprisingly unknown. The occurrence of the protist strain Colpoda cucullus in leaves and stems in the 1970s is one of the early findings on phyllosphere‐associated protists (Bamforth, ). Protists, especially consumers of the phylum Cercozoa, have recently been identified in the model plant Arabidopsis thaliana (Sapp et al., ), sorghum (Sun et al., ), grasses, legumes and forbs (Flues et al., ; Flues et al., ), with the ability to improve plant growth and biomass. The phyllosphere, a habitat of various phages, prokaryotes, protists, fungi, and visiting insects (e.g., bees, butterflies and herbivores), is thought to be regulated by their complex trophic interactions under the direct impacts of environmental changes. Protists shape the community composition and activities of bacteria and fungi through selective predation (Bonkowski, ; Gao et al., ). Notably, the selective predation of protists triggers distinct bacterial strains to produce antimicrobials, such as 2,4‐diacetylphloroglucinol (DAPG) and pyrrolnitrin (Jousset et al., ), or violacein (Matz et al., ), which has been recorded in interactions between one or a few model protist and bacterial species under in vitro conditions. Hence, protists may stimulate bacteria or fungi to excrete toxic metabolites to protect plants from air‐borne pathogens or herbivores. Furthermore, protists can potentially select beneficial traits of microbes through (i) promoting phytohormone‐producing bacteria and ultimately enhancing plant fitness and development; and (ii) regulating the metabolic and functional profiles of the bacterial community in the phyllosphere (Figure ). Some of the first evidence for the stimulation of phytohormone production by protists has been found in the plant rhizosphere, and their beneficial effects on plant hormones in the phyllosphere are a fertile area for discovery. Recent studies have indicated that bacterivorous amoebae promoted bacteria producing essential phytohormones (auxin and cytokinin) in the plant rhizosphere, although protists alone cannot produce plant hormones (Bonkowski & Brandt, ; Krome et al., ). Flues et al.
( ) revealed, through shotgun metagenomic sequencing, that the predation of the leaf‐associated protists Cercomonas and Paracercomonas strains (Cercozoa) dramatically influenced the taxonomic composition and metabolic functions of the leaf‐associated bacterial community under in vitro conditions, suggesting a strong regulatory effect of protists on the activities and functions of bacteria in the phyllosphere. Many other representatives of leaf‐associated Cercozoan consumers ( Rhogostoma spp.) were found to feed on fungi (here, yeasts) and algae in the phyllosphere of A. thaliana , and this grazing activity indicated crucial effects of protists on a wide range of microbes in the phyllosphere. Plants are not passively benefited by microorganisms but may proactively use the strategy ‘cry for help’ to recruit beneficial microorganisms to protect themselves under abiotic (e.g., drought or high temperature) and biotic stresses (e.g., pathogens or herbivores). The underlying mechanisms and recruited microorganisms of this strategy, however, are unclear and probably distinct across plant compartments. Strikingly, a broad spectrum of bacteria and fungi (e.g., yeasts) inhabit the anthosphere (i.e., flowers and surrounding zones), especially nectar, pollen (Vannette et al., ; Schaeffer et al., ) and the flower surface (Ushio et al., ; Arunkumar et al., ), which significantly influence flower‐pollinator interactions, plant reproduction and yield. Due to the diverse microbes transmitted from various sources, flowers are potentially dynamic hubs of microbes and pollinators. However, the diversity and roles of protists in the anthosphere are far from being fully elucidated. Moreover, endophytic protists colonize the root, leaf and stem endosphere, where their interplay with plant hosts and other microbes can possibly influence plant hormones, defensive systems and nutrient translocation to every plant tissue. The stimulation of nitrogen uptake and translocation from rhizosphere soils and plant roots to shoots by protists was reported in wheat plants (Clarholm, ; Henkes et al., ). Notably, the amoeba Acanthamoeba castellanii promoted the phytohormone production (auxins and cytokinin) of bacteria in the phyllosphere of cress ( Lepidium sativum L.) and A. thaliana (Krome et al., ). Recent studies have attempted to characterize the composition of protists in the plant microbiome (Dumack et al., ; Sun et al., ), hence further insights into the multitrophic interactions of protists with plants, microbes, air‐borne pathogens and insects in the phyllosphere are required.
Rhizosphere‐associated protists
In contrast to other plant compartments, protists in the rhizosphere have received more attention, with growing evidence for their crucial roles in (i) plant health and disease control (Xiong et al., ), (ii) nutrient cycling (Clarholm, ; Bonkowski, ), and (iii) plant hormones and growth (Bonkowski & Brandt, ). Many bacterial and fungal taxa are well‐known producers of antibiotics and toxic metabolites (Hutchings et al., ). The selective predation or even the mere presence of protists can trigger bacteria to produce specific antibiotics as weapons to kill or avoid protists through species‐specific responses (Nguyen et al., ). For instance, Pseudomonas fluorescens strain SS101 and Pseudomonas fluorescens strain SBW25 produced the antibiotics massetolide and viscosin, respectively, in response to the same bacterivorous amoeba Naegleria americana C1 (Mazzola et al., ; Song et al., ).
Fungi also emit antimicrobial volatiles to inhibit the bacterial motility or growth upon bacterial–fungal interaction (Rybakova et al., ; Bruisson et al., ). However, there is a paucity of effects of protists on the antibiotic excretion of fungi. The antibiotics produced by bacteria and fungi are considered as a defensive mechanism to toxify not only protists but also other microbial competitors in natural habitats (Święciło, ; Cruz‐Loya et al., ). Through this effect, when plants ‘cry for help’ by sending signals via root exudates (volatiles, organic acids or others) under pathogen or pest attacks (Liu et al., ), protists may respond by recruiting antibiotic producers to produce antimicrobials to inhibit pathogens or pests for plant protection. However, this strategy of plants and their associations with protists are still elusive questions. As primary microbial predators, protists can also directly consume bacterial and fungal pathogens. The consumptive effect of protists, typically protistan consumers, can cause fatality of a wide range of bacterial and fungal strains (Chakraborty et al., ; Dumack et al., ). In the rhizosphere of A. thaliana , the diversity and abundance of specific bacteria taxa, especially Betaproteobacteria and Firmicutes , were significantly decreased under the predation of soil amoeba A. castellanii . Bahroun et al. ( ) reported that bacterivorous protists alone and their synergistic interactions with bacteria reduced disease severity caused by a fungal pathogen Fusarium solani S55 and improved root length and plant growth of faba bean ( Vicia faba ) seedlings (Table ). Recent studies have indicated important links of protists to soil‐borne disease control and plant health in the rhizosphere of tomatoes (Xiong et al., ), cucumber (Guo et al., ) and banana plants (Guo et al., ). In particular, numerous Cercozoan and Amoebozoan species can function as important indicators for the health of tomato plants. Guo et al. ( ) also revealed that the protistan consumer Cercomonas lenta strain ECO‐P‐01 substantially suppressed the density of the fungal pathogen Fusarium oxysporum and increased the disease‐suppressive bacteria Bacillus in the rhizosphere, and subsequently improved banana plant growth and yield. Hence, a comprehensive understanding of protists in the rhizosphere and other plant compartments will promote their applications in plant disease suppression. Protists are also pivotal contributors to nutrient cycling in the rhizosphere (Table ). Nutrients are temporarily locked up in rhizosphere bacterial and fungal biomass and can be translocated to protists as microbial feeders or unlocked by the protists' predation and eventually channelled to benefit plants, which is called ‘the microbial loop’ (Clarholm, ). Protists directly release nitrogen and carbon after prey digestion or form a symbiotic relationship with beneficial fungal or bacterial taxa in cycling essential nutrients (nitrogen, carbon, iron, silicon or phosphorous) (Geisen et al., ; Gao et al., ), enhancing soil nutrient input and fertility for nurturing plant growth and rhizo‐microbiome. The great contribution of protists to nutrient cycling has long been recognized since 1985, when Clarholm demonstrated the increasing nitrogen uptake to 75% by plants under the inoculation of protists. 
The presence of protists promoted plant phosphorus and calcium uptake and translocation to stems or needles, as well as modulated nutrient concentrations (nitrogen, phosphorus, carbon to nitrogen ratio (C/N ratio), calcium and magnesium) (Bonkowski et al., ). Consequently, this regulation of protists led to the improvement of root growth and architecture as well as biomass of different compartments (shoots, roots and needles) of spruce seedlings. A similar beneficial effect of protists was found in rice plants ( Oryza sativa L.) (Henkes et al., ). Moreover, phototrophic protists contribute to carbon cycling as carbon fixers via photosynthesis (Schmidt et al., ), providing nonnegligible carbon and oxygen inputs to rhizosphere organisms and the basis for soil life, but their capacity for carbon sequestration is still unknown. Notably, benefits of protists to plant nutrition are more efficient when forming symbiosis with other microbes, particularly arbuscular mycorrhizal fungi (AMF) that enhance plant nitrogen and phosphorus uptake. Protists might facilitate nutrient acquisition, mineralization and translocation of AMF (Zuccaro et al., ; Henkes et al., ), and promote the growth and activities of nitrifying bacteria and other bacteria (Bonkowski, ), suggesting intimate protist–microbe links in plant benefits. For instance, Bonkowski et al. ( ) indicated that the joint effects of protists and mycorrhiza significantly enhanced the phosphorous uptake from roots to stems, as well as affected rhizosphere microbes and essential plant nutrients (carbon, phosphorous and trace elements), which maximized the biomass of different spruce compartments (shoots, stems and needles). Protists, in rumen ecosystems, were detected to have positive links to archaea (Solomon et al., ), which are key players in the global nitrogen cycle (Hu et al., ). However, the contribution of archaea to plant hosts and protist‐archaea relationships in nutrient cycle is an intriguing unexplored topic. Upon nutrient shortage, beneficial protist–microbe interactions may be boosted by the plant strategy ‘cry for help’, and we cannot have a full understanding of protists' roles if ignoring their contributions. Protists can also significantly influence plant hormones and development through regulating the community structure and activities of plant‐hormone producing rhizobacteria. Plant growth‐promoting phytohormones auxins (indolyl‐3‐acetic acid (IAA)) were found in the inoculation of the most studied model species A. castellanii in bacterial cultures (Nikoljuk, ) and rhizosphere of watercress seedlings ( L. sativum ) by modulating phytohormone‐producing bacteria or rhizobacterial community (Bonkowski & Brandt, ). While root systems are paramount apparatus to take up and allocate water and nutrients to every plant tissue for plant growth and environmental adaptation, protists, such as A. castellanii , can trigger phytohormone production (auxins and cytokinin) of bacteria, resulting in the enhancement of root growth and architecture and development of plants L. sativum and A. thaliana , more than bacteria standalone (Bonkowski & Brandt, ; Krome et al., ). Interestingly, the regulation on phytohormone‐producing bacteria strengthens root growth and architecture of many crops, including watercress, pea and cress (Table ). Hence, it is evident that soil‐ or rhizosphere‐associated protists can significantly influence both the above‐ and below‐ground compartments of the plant hosts. 
While microbes are acknowledged as important hormone producers of plants (Nakano et al., ), more explorations of protists' roles in the inter‐organismal phytohormone networks between plant hosts, protists and other microbes are critical to deploy beneficial protists in improving plant immunity and development. Protists in bulk soils In bulk soils, the diversity of protists is higher than that in the rhizosphere, root and litter (Ceja‐Navarro et al., ; Fiore‐Donno et al., ), which indicates that soil protists function as a ‘microbial seed bank’ for plant support and soil functions, as well as the selection of plants for protist communities. Moreover, protists can influence elemental cycles, soil fertility and soil microbiome by (i) steering the composition and activities of beneficial microorganisms (e.g., AMF or nitrifying microbes), (ii) excreting nitrogen or carbon sources after the predation and consumption of prey in bulk soils, and (iii) mediating the community composition and interactions of soil microbiome via facilitative, symbiotic or predatory relationships between protists and other microbes. The positive relationships between protists and bacteria have been identified in soil ecosystems (Nguyen et al., ), but further research is required to disentangle mechanisms for the interplay and roles of protists, fungi, bacteria, archaea, and viruses in plant‐associated microbiota. Beside the aforementioned benefits, parasitic protists have negative effects on plants, as pathogens have been more thoroughly characterized than neutral and beneficial protists (Dumack & Bonkowski, ). A large number of non‐pathogenic endophytic protists inhabit plant tissues and across rainforest soil ecosystems (Mahé et al., ), but their identity and functions on plant hosts remain unknown. Given their high abundance in natural habitats, we suppose that endophytic protists have unexplored benefits to plant hosts.
Protists alone or their interactions with other microbes are considered to play crucial roles in the plant holobiont. It is promising to develop protist‐based tools to enhance nutrient availability and plant growth as biofertilizers, to control plant disease infection and microbial functions as biocontrol agents, or to promote plant hormones and nutrient cycling activities and survival of plant beneficial microbes in modern agriculture. Compared to other plant microorganisms, the functions, signalling and feedbacks of protists in multi‐organismal (host–protist, protist–microbe, and protist–visiting insects) interactions with or without the infection of soil‐borne or air‐borne plant pathogens and pests are largely unexplored. A more comprehensive understanding of the molecular mechanisms and functions of plant–protist–microbe interactions will enable us to steer the activities and performance of microbes in the plant holobiont. Therefore, we propose and discuss future frameworks to generate a holistic view of plant‐associated protists and the manipulation and applications of protist‐based models in crop production, namely: (i) identification of key factors structuring the taxonomic and functional traits of plant‐associated protists as well as the core and keystone taxa of protists; (ii) isolation and selection of plant beneficial protists for various crops under different stresses; and (iii) establishment and applications of protist‐based synthetic communities (SynComs) to improve plant performance (Figure ). Firstly, the identification of key factors structuring the taxonomic and functional traits of plant‐associated protists is a crucial step. To date, most studies have characterized plant‐associated protists by conventional (microscopy‐based and direct counting) methods, quantitative PCR or amplicon sequencing. Identifying microbial eukaryotes with high throughput sequencing techniques, however, is not straight forward, since severe primer‐biases were identified in previous protist surveys (Lentendu et al., ; Hirakata et al., ). For instance, although many soils are known to be dominated by protists of the taxa Amoebozoa and Cercozoa, the primer‐based surveys constantly underestimate the importance of Amoebozoa (Bonkowski et al., ). Metatranscriptomics can overcome this issue as they do not rely on primers and, in accordance to what is found by morphological surveys, Amoebozoa may dominate in such datasets (Urich et al., ; Geisen et al., ). Furthermore, it is still difficult to estimate exact functioning of protists. Trait databases are helpful for the exploration of functioning in microbial eukaryotes (Dumack et al., ), but there is still a lack of a database covering all distinct protistan taxa. The characterization of protists in different plant compartments (including phyllosphere, leaf, stem and root endosphere, rhizosphere soil and bulk soil) in large‐scale field investigations is important to have full understandings about their taxonomic and functional diversity and community compositions for each plant species. The combination with co‐occurrence networks and statistical modellings will further disentangle the key drivers and principles shaping the protist community assembly and dynamics in plant microbiome, as well as build up a database of key protists that best predict plant performance parameters. 
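As an illustration of the kind of statistical modelling mentioned above, the sketch below ranks protist taxa by how strongly they predict a plant performance parameter with a random forest. The abundance table, the response variable (shoot biomass) and all settings are hypothetical placeholders, not data or code from the cited studies.

```python
# Hypothetical sketch: ranking protist taxa by how well they predict a plant
# performance parameter. `X` is a samples x taxa table of relative abundances,
# `y` a measured performance variable (e.g., shoot biomass); all placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def rank_protist_predictors(X: pd.DataFrame, y: pd.Series, n_trees: int = 500):
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=0)
    # Cross-validated R2 guards against over-interpreting in-sample fits
    r2 = cross_val_score(rf, X, y, cv=5, scoring="r2").mean()
    rf.fit(X, y)
    importance = pd.Series(rf.feature_importances_, index=X.columns)
    return r2, importance.sort_values(ascending=False)
```

Taxa that rank consistently high across crops, compartments and sites would be natural candidates for the database of key protists envisaged above.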
Many persistent and abundant members of a specific host found across wide‐range habitats constitute a core plant microbiota, which carry essential genes to support plant fitness as well as play crucial roles in maintaining multiple functions and stability of the host microbiome (Shade & Stopnisek, ). Core bacterial taxa, for example, members of the orders Rhizobiales and Pseudomonadales, are reported to benefit plant fitness, growth and resilience under stresses (Trivedi et al., ). Notably, keystone taxa of protists, highly associated members regardless of their abundance, deserve special attention because they crucially affect community structure and functions (Banerjee et al., ). Therefore, the determination of core and keystone taxa of protists for major crops across different regions, along with core and keystone taxa of bacteria and fungi (Banerjee et al., ; Trivedi et al., ), will leverage our capacity to manipulate plant microbial activities and design optimal SynCom models for maximizing growth and yields of specific crops. Crucial for this is a coupling of high throughput sequencing and metatranscriptomic approaches with subsequent culture attempts, first to identify core symbionts and then to provide them as a culture to research. Secondly, to incorporate protists into the agrifood toolbox, it is paramount to establish a collection of plant beneficial protists for various crops under different stress conditions. It is promising to tailor the high‐throughput isolation approach which has proved to be effective in isolating bacterial strains from root microbiota (Zhang et al., ), to characterize and isolate protists from various plant tissues (e.g., leaf and stem endophytes). In the first selection step, the data integration of plant protists from large‐scale investigations with findings of the high‐throughput protist isolation will be a crucial reference for selection and nomination of promising protist species to establish protist‐based SynComs for improving plant performance. In the second selection step, the selected protist species, alone or in a subset of core and keystone protists, can be preliminarily tested for their capacity in performing desired functions, such as suppression of common fungal pathogens and resistance to abiotic stresses, in short‐term controlled laboratory conditions. Core and keystone taxa of protists conferring desired plant‐beneficial functions will be considered as key members of the protist‐based SynComs. However, other core and keystone taxa of protists, which do not have the desired features in the preliminary tests, should not be discarded because their performance may be boosted in facultative or antagonistic interactions with specific microbes. Thirdly, protist isolates alone or combined with beneficial bacterial or fungal strains are used to construct different protist‐based synthetic communities to improve plant health and performance. These SynComs can mimic biological interactions (e.g., competition, predation or symbiosis) in natural settings, and the diversification of trophic interactions (e.g., bottom‐up and top‐down controls or trophic cascade) will boost microbes to produce crucial products (e.g., phytohormones, antibiotics and other compounds), and consequently stabilize the phytobiome and promote crop development. We propose to apply the protist‐based SynCom models for monoculture or mixture of plant species. 
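A minimal sketch of one way candidate core and keystone taxa could be flagged from survey data is given below; the occupancy and correlation thresholds are arbitrary placeholders, and network degree is only one simple proxy for the keystone concept discussed above.

```python
# Illustrative sketch (assumptions, not the cited methods): candidate core taxa
# by occupancy across samples, candidate keystone taxa by connectivity in a
# simple Spearman co-occurrence network built from a samples x taxa table.
import numpy as np
import pandas as pd

def core_and_keystone(abund: pd.DataFrame, occupancy_min: float = 0.8, rho_min: float = 0.7):
    occupancy = (abund > 0).mean(axis=0)              # fraction of samples in which a taxon occurs
    core = occupancy[occupancy >= occupancy_min].index.tolist()

    corr = abund.corr(method="spearman").to_numpy()   # taxon-by-taxon correlations
    np.fill_diagonal(corr, 0.0)                       # ignore self-correlations
    adjacency = np.abs(corr) > rho_min
    degree = pd.Series(adjacency.sum(axis=1), index=abund.columns).sort_values(ascending=False)
    return core, degree                               # highly connected taxa = keystone candidates
```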
Neighbouring crops in the plant mixture can increase interspecific interactions and functions of beneficial microbes, and plant uptake of essential resources (nutrients or water), with positive consequences for disease suppression and plant growth (Jing et al., ). All SynComs will be assessed for their efficacy in benefiting plant fitness and growth, nutrient cycling and uptake, disease control and stress tolerance for each plant species. Some protist‐based products have already reached the market and been applied in crop production, such as the protist species Nosema locustae , used as a biological control agent against over 90 species of grasshoppers, locusts, and crickets in the United States ( https://www.gardeninsects.com/grasshopperbait.asp ), and 19 biofertilizers developed from a mixture of beneficial protists, bacteria and fungi that enhance nutrient supply, plant growth and resilience for a variety of crops in the Netherlands ( https://ecostyle.nl/zoeken?query=protozoa ). In this step, the integration of multiple ‘omics’ techniques (including metatranscriptomics, metaproteomics and metabolomics) with machine learning and statistical modelling, rather than amplicon sequencing or any one single method, will enable us to characterize the panoramic profile of cellular activities, functions, molecular signalling and metabolites of protists, the plant host and other organisms in the plant holobiont. Metatranscriptomics elucidates microbial identity, gene expression and the functional profile of protists and other organisms, while metaproteomics (e.g., matrix‐assisted laser desorption‐ionization time of flight (TOF)/TOF‐mass spectrometry (MS)) is powerful for unravelling protein identification, quantification and origin (Wang et al., ). Metabolomics can detect and quantify untargeted primary metabolites (e.g., organic acids, amino acids, and others), such as by gas chromatography (GC)‐MS, and secondary metabolites produced by plant hosts, protists and other associated microorganisms, such as by liquid chromatography (LC)‐high‐resolution MS (LC‐HRMS) (Weckwerth, ; Sumner et al., ). The application of machine learning and statistical modelling to transcriptomic, proteomic and metabolomic data can maximize our capacity to identify and predict compound composition, metabolic pathways, and the functional traits and activities of protists together with the plant hosts and other organisms. For instance, the software METABOLIC is an advanced toolkit to profile metabolic and biogeochemical traits and functional networks in microbial communities (Zhou et al., ). This integrated strategy will help us to explore and confirm the multifunctionality and benefits of plant‐associated protists to plant hosts and to understand the complex trophic interactions within the plant–soil system in a holistic manner. The development of protist‐based SynCom models as agrifood tools has the potential to improve agricultural production. It is obvious that there will not be ‘one size fits all’ SynComs (Vorholt et al., ), hence the construction of protist‐based solutions should target species‐ or tissue‐specific SynComs for distinct plants at different developmental stages to optimize their efficacy for plant productivity, much like commercial fertilizers or pesticides.
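How the efficacy assessment described above might be summarized is sketched below for a hypothetical pot experiment; the column names, the single response variable and the use of Welch's t-test (without the multiple-testing correction a real screen would require) are assumptions made purely for illustration.

```python
# Hypothetical sketch of summarizing SynCom efficacy trials: comparing a plant
# performance variable between each SynCom treatment and an uninoculated control.
import pandas as pd
from scipy import stats

def syncom_effects(df: pd.DataFrame, control: str = "control") -> pd.DataFrame:
    """df columns: 'treatment' (SynCom label or 'control') and 'shoot_biomass'."""
    ctrl = df.loc[df["treatment"] == control, "shoot_biomass"]
    rows = []
    for trt, grp in df[df["treatment"] != control].groupby("treatment"):
        t, p = stats.ttest_ind(grp["shoot_biomass"], ctrl, equal_var=False)  # Welch's t-test
        rows.append({"treatment": trt,
                     "mean_ratio_vs_control": grp["shoot_biomass"].mean() / ctrl.mean(),
                     "p_value": p})
    return pd.DataFrame(rows).sort_values("p_value")
```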
Given the above‐mentioned contributions of protists to plants, we advocate for future efforts to target the development of beneficial protists as novel and sustainable biofertilizers for improving plant growth and productivity, as biological control agents for enhancing defence against pathogens, and as biological stimulation strategies for boosting microbial activities and plant‐promoting traits for plant health and performance. Biofertilizers are gaining interest across the agricultural sector; given the recent rapid increases in fossil fuel and fertilizer prices, the inoculation and formulation of protists into biofertilizers could be a powerful way to unlock natural nutrient sources, as well as inorganic and organic fertilizers, in soils. Nevertheless, the lack of sufficient knowledge about the roles of protists in soil ecology limits our ability to manage soil health for sustaining crop production. Future research on unravelling the functions and strategies of beneficial plant‐associated protists is necessary to enhance plant health and production, and thus reduce the application of fungicides and pesticides.
Protists are key members of the plant‐associated microbiota. It is evident that their contributions, alone or in combination with other microorganisms, significantly benefit plants in not just one but multiple aspects, such as plant nutrition, disease control, plant health and performance. The interplay between the hosts and associated protists in different plant and soil compartments is complex and still far from being fully elucidated. Therefore, there are calls to disentangle the driving factors and key roles of protists in ecological processes and agricultural productivity, which can provide new insights into the manipulation and application of beneficial protists as biofertilizers and other agricultural products for benefiting crop health and productivity and ultimately sustaining healthy agricultural systems. Although the benefits of protists to plants as bio‐fertilizers or biocontrol agents have been recognized, we still face limitations in the approaches used to study the identity and functions of protists, as well as challenges in how to engineer efficient protist‐based SynComs and how to maintain their persistence and efficacy in crop production. The innovation of plant‐beneficial products from protists is a daunting task, but it will pave the way to accelerate the development of protist‐based products and to innovate novel mobile molecular technologies to quickly assess and monitor the activities and community composition of the applied beneficial microbiome in smart‐farming systems and agricultural fields in the near future. We also highlight some important questions about plant‐associated protists in the plant holobiont:
(1) Through which biochemical or molecular signals do protists recruit or form partnerships with other plant‐associated microorganisms (e.g., bacteria or fungi) to benefit plant hosts, and interact with insects (e.g., pollinators or ants) or herbivores?
(2) Which individuals or groups of protists are recruited by the plant hosts?
(3) How do protists in different plant compartments respond to the ‘cry for help’ strategy of plants under biotic (pathogen or pest infection) or abiotic (e.g., low/high temperature, drought or salinity) stresses?
(4) What are the interactions between plant hosts and the plant or soil microbiome under the impacts of climate change?
(5) Besides plant roots, do other plant tissues (e.g., leaf or stem) use a similar ‘cry for help’ strategy to interact with or recruit beneficial protists and other microorganisms for dealing with different stresses?
(6) How can SynComs and other protist‐based tools be safely introduced and applied to recipient soils and crops?
(7) How can we estimate and maintain the efficiency and persistence of protist‐based SynComs and other tools in enhancing plant growth and productivity in recipient soils and crops?
Answering these questions is a challenge but also a great opportunity to leverage our capacity to deploy plant‐associated microbiota to improve crop health and performance. No single method, but rather the integration of advanced approaches, will help us fully understand the complex interactions in the plant holobiont.
Bao‐Anh Thi Nguyen: Conceptualization; literature investigation; writing – original draft; writing – review and editing. Kenneth Dumack: writing – review and editing. Pankaj Trivedi: Writing – review and editing. Zahra Islam: Writing – review and editing. Hang‐Wei Hu: Conceptualization; funding acquisition; supervision; writing – review and editing.
We declare that we have no competing interests.
|
Microbiome resilience of Amazonian forests: Agroforest divergence to bacteria and secondary forest succession convergence to fungi
|
ec310bdc-4897-40d3-a693-150d227cfc0a
|
10108277
|
Microbiology[mh]
|
INTRODUCTION Over the past decade Amazonian rainforest has been converted to commodity production (pasture and soybean) at a rate of 6.54 M hectares per year (Kim et al., ). To circumvent the limitations presented by nutrient‐poor soils, many farmers adopt slash‐and‐burn practices, which use fire to quickly mineralize nutrients stored in the plant biomass and make them available for subsequent crops. However, the soils of the humid tropics are particularly vulnerable to degradation as the warm and humid environment promotes rapid organic matter decomposition and mineralization, nutrient loss caused by leaching and runoff (Markewitz et al., ), and gaseous nitrogen losses (Brookshire et al., ). Production thus declines rapidly after burning, causing farmers to abandon such land and move to a different plot of the forest, leading to further deforestation. Ultimately, repeated slash‐and‐burn cycles and shortened fallow periods (Lawrence et al., ) lead to reduced agronomic productivity (Runyan et al., ; Styger et al., ), thereby exacerbating rural poverty (Jakovac et al., ; Satyam Verma, ). Agroforestry has been proposed a sustainable alternative to slash‐and‐burn shifting cultivation in the tropics. The core principle of agroforestry systems (AFS) lies in combining trees with crops, and/or animals in the same plot of land (a multistrata system) (Atangana et al., ) to mimic plant succession in the spontaneous forest (Cezar et al., ; Young, ), while including crop production (Cardozo et al., , ). When appropriately managed, agroforestry practices improve the topsoil physico‐chemical properties by increasing phosphorus and potassium contents (Pinho et al., ), maintain soil organic matter content (Leite et al., ), and promote nutrient cycling via nutrient pumping and safety net mechanisms (Seneviratne et al., ), which all strictly depend on ecosystem services delivered by the soil microbes (Wagg et al., ). Therefore, integrating the soil microbial community with the aboveground biomass and soil factors provides a fuller overview of the impacts of different management practices on the aboveground–belowground interactions in AFS. Intentionally or unintentionally, AFS are designed to spatially, physically, and temporally optimize resource use by maximizing the positive interactions, and minimizing the negative interactions between plants and soil subsystems (aboveground–belowground interactions). However, compared with the spontaneous forests, agroforestry weakens the intensity of aboveground (plant)–belowground (soil chemical factors) (Leite et al., ). Thus, while the non‐sustainable land use intensification in slash‐and‐burn practices clearly has negative impacts on soil nutrient recycling, above‐ and belowground biodiversity and ecosystem functioning and stability (Thiele‐Bruhn et al., ), intensely managed AFS may likewise interfere in the aboveground–belowground linkages that impact ecosystem functioning, especially nutrient cycling. The challenge in investigating the aboveground–belowground interactions in an agroforestry system begins with the multiple components or subsystems that play a major role in determining system functioning. 
Research on Amazonian forests to date has generally focused on tree–crop interactions (González & Kröger, ; Maezumi et al., ; Pinho et al., , ; Stabile et al., ) or plant–animal interactions, and few studies in other tropical regions (Africa, Central America, and Asia) considered the impact of agroforestry practices on soil microorganisms (Liu et al., ; Schneider et al., ; Wemheuer et al., ). To our knowledge, no studies have considered the interaction between above‐ and belowground in a holistic approach including soil microbiome, the main players in soil nutrient cycling, in AFS and compared these systems with the secondary succession and mature Amazonian rainforests. Hence, here we investigated the capacity of the AFS to mimic the aboveground–belowground interactions found in mature forests (MFs) and compare that with spontaneous secondary forest recovery. We linked microbiome features to measures of aboveground vegetation biomass, litter mass, and the topsoil physico‐chemical properties. By including the soil microbiome, we contribute to the design of more sustainable systems that better mimic the aboveground–belowground interactions of MFs.
MATERIALS AND METHODS
2.1 Field survey, site selection, and classification
The study was conducted in the eastern periphery of Amazonia, on 56 study sites in six counties (Anajatuba–Itinga, Arari, Morros–Rosário, São Luís, Gurupi, and Tomé‐Açu). Forty of the 56 sites were located in central‐northern Maranhão state; the others were approximately 400 km further westward in Tomé‐Açu county in eastern Pará state, Brazil (Figure ). The maximum distance between sites within each county was <30 km, and the maximum distance between counties within each regional cluster was <150 km. According to the Köppen classification, the climate is Aw and Ami and varies slightly between the two regional clusters (2100 mm annual rainfall in central Maranhão state and 2300 mm in eastern Pará state, with 6 and 5 months of hydric deficit, respectively). Soils are nutrient‐poor acid Oxisols or Ultisols (USDA, ), and the topsoil texture is loamy/fine sand. We classify and compare four types of spontaneous forests with three types of planted or partially planted agroforests. We cover spontaneous secondary forest succession in young, mid‐age and old spontaneous secondary forests and mature rainforest, and compare these with three types of agroforests (enriched fallow agroforest; homegarden agroforest; commercial plantation agroforest). Site selection and classification were based on the work of Cardozo et al. ( ) and Leite et al. ( ), as follows: (i) Spontaneous secondary and mature rainforests: Secondary forests following slash‐and‐burn shifting cultivation or on abandoned pastures. Young secondary forests (YSF) consisted of sites that had recently (5–12 years ago, five sites) been subjected to slash‐and‐burn agriculture (Pollini, ). Mid‐age secondary forests (MSF) represented sites where the last cycle of slash‐and‐burn agriculture occurred 15–20 years ago (six sites). Old secondary forests (OSF) grouped the sites reportedly in a fallow period of more than 30 years (seven sites). Mature forests (12 sites) were also distinguished, and represented original MFs without any visible human perturbation or with low‐intensity selective logging >60 years ago. (ii) Agroforests: We distinguish three types of AFS with contrasting structure and management: enriched fallow agroforests (EFAs, six sites), established by enrichment planting of fruit and timber species in the understory of 15–25‐year‐old OSFs; homegarden agroforests (HAs, 13 sites), tall multistrata agroforests surrounding houses, virtually omnipresent in the study region and throughout the tropics (Kumar & Nair, ); and commercial plantation agroforests (CPAs, seven sites), regularly spaced plantations with inorganic fertilization and liming, developed or inspired by Japanese immigrants. Only CPA had received fertilization (NPK applied close to the plants and following the agronomic recommendations of each species) as well as initial liming. According to Cardozo et al. ( ), the most common species in the agroforests were: açaí ( Euterpe oleracea Mart.), mango ( Mangifera indica L.), banana ( Musa spp.), cupuassu ( Theobroma grandifolium Wild ex. Spreng), cocoa ( Theobroma cacao L.), and cashew ( Anacardium occidentale L.). Table classifies our 56 study sites according to their land‐use and geographic localization.
2.2 Sampling scheme, aboveground biomass estimation, and soil sampling
We adopted a joint (synchronous and geosystematic) sampling scheme for all variables, to guarantee the compatibility of datasets for all investigated components.
Vegetation and litter sampling strived to capture the differing scales of plant influence zones, as outlined in Rhoades ( ). We estimated the aboveground biomass of large trees (AGB ≥10 cm diameter at breast height) in the circular main plot (25 m radius, 1963 m²), and the minor vegetation and litter in five subplots (25 and 1 m², respectively, for minor vegetation and litter). We obtained topsoil (0–20 cm) as composite samples from the centers of the five subplots. We adapted our sampling scheme in CPA to its different forest structure (regularly spaced tree plantation), which contrasts with all other systems. Instead of a circle, we used three quadrangular main plots of 25 × 25 m. The subplots and transects were sampled as above. Further details about the sampling scheme are presented in previous studies (Cardozo et al., ; Leite et al., ) and can be found in Figure .

Large biomass components were estimated allometrically via diameter‐based equations for mature rainforest trees (Overman et al., ), secondary forest trees (Nelson et al., ), lianas (Gehring et al., ) and, when present, babassu palms (Gehring et al., ), and also via conversions between the dbh and the diameter measured at a 30‐cm height for smaller vegetation components (Gehring et al., ). The following were distinguished: large vegetation (trees with dbh ≥10 cm and palms >2 m high) (AGB ≥10 cm dbh); mid‐sized vegetation (trees, shrubs, and lianas with dbh <10 cm, and palms <2 m high); and small vegetation (herbaceous plants and shrubs <1.30 m high). Small vegetation was estimated destructively and jointly with the litter layer. For statistical analyses, mid‐sized and small vegetation were combined (AGB <10 cm dbh). The biomass of fallen logs in transects (Brown, ; Chave et al., ; Van Wagner, ) and of standing dead logs in the circular main plots was quantified following the line‐intercept method described in Arevalo et al. ( ). We estimated small (<1 m height) vegetation and the litter layer (distinguishing between leaves and twigs) destructively; dry matter contents were determined after oven‐drying at 65°C until constant weight.
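As a rough illustration of this type of diameter‐based estimation, the short R sketch below computes plot‐level AGB from stem diameters. The power‐law form and the coefficients a and b are placeholders for illustration only; they are not the published equations of Overman et al., Nelson et al., or Gehring et al.

# Minimal sketch of diameter-based allometric biomass estimation.
# Coefficients a and b are illustrative placeholders, not published values.
agb_from_dbh <- function(dbh_cm, a = -2.0, b = 2.4) {
  # generic power-law allometry: AGB (kg) = exp(a) * dbh^b
  exp(a) * dbh_cm ^ b
}

# Example: per-plot totals for trees >= 10 cm dbh versus smaller vegetation
plot_trees <- data.frame(dbh = c(12.3, 25.1, 8.7, 31.4, 9.9))
agb_large <- sum(agb_from_dbh(plot_trees$dbh[plot_trees$dbh >= 10]))
agb_small <- sum(agb_from_dbh(plot_trees$dbh[plot_trees$dbh < 10]))
c(AGB_ge10 = agb_large, AGB_lt10 = agb_small)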
We sampled 0–20 cm soil in each sub‐quadrant as specified in Figure , resulting in five samples per site for the spontaneous forests, EFA, and HA, and six samples for the CPA sites. Soil biological samples were stored in the field at 4°C and subsequently frozen at −80°C for DNA extraction. All sampling was performed during the rainy season (from mid‐January to early April 2015). As indicators of topsoil physical quality, we determined soil bulk density (volumetric rings) and soil texture (via a pipette method), following procedures described in Klute et al. ( ). For topsoil chemistry, we followed the routines of the Agronomic Institute of Campinas‐IAC (van Raij et al., ), measuring the following indicators: pH, determined via soil suspension in 0.01 M CaCl₂; soil organic matter, determined by the Walkley–Black digestion method; plant‐available P, estimated via extraction with a synthetic anion exchange resin (Amberlite IRA‐400); exchangeable K, determined via Mehlich I extraction; Ca and Mg, determined via KCl extraction; and H + Al, determined by the Shoemaker–McLean–Pratt (SMP) method.

2.3 Amplicon‐based 16S and 18S rRNA gene analyses

Total soil DNA was extracted from 0.25 g of soil using the Power Soil kit (Mobio), following the manufacturer's instructions. To assess the impact of treatments on the bacterial and fungal communities, we sequenced the 16S and 18S rRNA genes. The 16S rRNA sequencing targeted the V4 region to amplify the archaeal/bacterial communities using the primers 515F (5′‐GTGCCAGCMGCCGCGGTAA‐3′) and 806R (5′‐GGACTACHVGGGTWTCTAAT‐3′) (Caporaso et al., ). For the 18S rRNA gene, the selected marker targeted the fungal community using the primers FR1 (5′‐AICCATTCAATCGGTAIT‐3′) and FF390.1 (5′‐CGWTAACGAACGAGACCT‐3′) (Verbruggen et al., ), which contain a small modification to detect Glomeraceae. The sequences were PCR amplified using barcoded primers (Caporaso et al., ). The 16S rRNA gene amplification for library preparation was performed in a C1000 thermocycler (Biorad) with the following thermal conditions: 95°C for 5 min; 35 cycles of 95°C for 30 s, 53°C for 30 s, and 72°C for 60 s; and 72°C for 10 min. A 25‐μl reaction contained 2.5 μl of 10× PCR buffer, 2.5 μl of dNTPs (200 μM), 0.25 μl of each primer (0.1 pmol/μl), 0.2 μl of FastStart Exp polymerase (0.056 U), and 1 μl of DNA (0.6 ng). The 18S rRNA gene amplification reactions were performed using 5 μM of each primer, 2 mM dNTPs (Invitrogen), 0.5 μl of BSA, 10× PCR buffer, 0.56 units of FastStart Exp polymerase, and 1 μl of sample DNA template in a total reaction volume of 25 μl. The PCR was conducted with an initial incubation of 5 min at 95°C, followed by 25 cycles of 30 s at 95°C, 1 min at the annealing temperature of 57°C, and 1 min at the extension temperature of 72°C, followed by a final extension for 10 min at 72°C. The reactions were performed in triplicate and a negative control was included. The amplicon sizes were checked by gel electrophoresis. PCR products were purified using the Agencourt AMPure XP system (Beckman Coulter) to remove primer dimers, quantified using a Fragment Analyzer (Perkin‐Elmer Corp.), and mixed in equimolar amounts for sequencing on the Illumina MiSeq (Illumina Inc.).

Sequences of the 16S and 18S rRNA partial gene amplicons were processed using the dada2 workflow (Callahan et al., ) on a 32‐node server running Linux Ubuntu 14.4. The forward and reverse primer sequences were removed from the FASTQ file of each sample using Flexbar version 2.5 (Dodt et al., ). Reads were filtered based on sequence quality using the Sickle tool (minimum quality score of 25 and minimum length of 150) (Joshi & Fass, ). Taxonomic information for each ASV was added to the BIOM file using the SILVA rRNA gene database (version 132) (Quast et al., ). Both bacterial and fungal communities were characterized at the genus level. The sequences were deposited in the ENA database. In total, the sequencing resulted in 3,308,164 reads for bacteria and 3,334,355 for fungi, with an average of 13,726.82 reads of bacteria and 13,835.5 reads of fungi per sample. The rarefaction curves for both bacterial and fungal communities are presented in Figures and .
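As a rough sketch of this processing step, the following R code outlines a typical dada2 run from quality‐filtered reads to a chimera‐free ASV table with SILVA taxonomy. The file paths, truncation lengths, and the SILVA training‐set file name are illustrative assumptions, not the exact parameters used in this study; primer removal with Flexbar and quality filtering with Sickle are assumed to have been done beforehand.

library(dada2)

# Illustrative paths and parameters; not the exact settings of this study
fnFs <- sort(list.files("reads", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs <- sort(list.files("reads", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Additional quality filtering and truncation (lengths are placeholders)
filterAndTrim(fnFs, filtFs, fnRs, filtRs, truncLen = c(200, 150),
              maxEE = c(2, 2), truncQ = 2, multithread = TRUE)

# Error learning and denoising into amplicon sequence variants (ASVs)
errF <- learnErrors(filtFs, multithread = TRUE)
errR <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

# Merge read pairs, build the ASV table, and remove chimeras
merged <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab <- makeSequenceTable(merged)
seqtab <- removeBimeraDenovo(seqtab, method = "consensus", multithread = TRUE)

# Assign taxonomy against SILVA v132 (the training-set file name is an assumption)
taxa <- assignTaxonomy(seqtab, "silva_nr_v132_train_set.fa.gz", multithread = TRUE)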
2.4 Statistical analysis

Analysis of the soil microbial community using next‐generation sequencing data is challenging. Several studies have pointed out potential biases associated with the method of DNA extraction (Dimitrov et al., ), PCR, and sequencing (Kennedy et al., ). Altogether, these potential problems might mislead interpretations, especially when they are combined with the distance measures traditionally adopted for the investigation of clustering and similarities between treatments (Warton et al., ).

The composition of microbial communities in soil is tightly connected with soil characteristics (Cassman et al., ), nutrient availability (Delgado‐Baquerizo et al., ; Pan et al., ), plant biomass (Aponte et al., ), and symbiotic interactions (Albornoz et al., ). These parameters are in turn connected with land use and management practices (Barnes et al., ). These relationships make the use of environmental variables as predictors of the microbiome prone to collinearity and overfitting (Dormann et al., ). To circumvent this problem, we adopted generalized joint attribute modeling (GJAM) (Clark et al., ). This model allows one to include variables of different types and to analyze them jointly, revealing the regression coefficients of the effects of different land uses on the relative abundance of taxa within the soil microbiome, constrained by compositionality (Gloor et al., ), aboveground biomass, and soil factors. For the microbial community data, GJAM also allows us to evaluate the model fit for both the abundance (in our case, the relative abundance, constrained by compositionality) and the diversity (given by the Shannon index). In these preliminary analyses, the model showed good explanatory capacity for the changes in microbial relative abundance (Figure ), although it underestimated richness and overestimated Shannon diversity (Figure ). Based on this outcome, we focused our analysis on shifts of the microbial community at the genus level, for which we obtained the best fit for understanding community variability. Since GJAM is based on Bayesian statistics, we obtained regression coefficients and considered them significant when the 95% highest posterior density (HPD) interval did not include zero. In our study, zero represents the null hypothesis that there are no differences between the land‐use systems (secondary successional stages, agroforests, and MFs). For the current study, we focused on the significant regression coefficients as a proxy for the changes in the aboveground–belowground components (plant biomass, soil factors, and microbial communities) of the different land uses. Subsequently, we performed a hierarchical clustering analysis (Euclidean distance and Ward algorithm) of the regression coefficients to identify similarities in the responses to land use. Because MFs did not occur in every county, geographic distance was a potential factor affecting the results and was therefore included in the modeling. GJAM allows the inclusion of random effects, which account for the within‐site replicates and the regional (between site clusters) variability.
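For readers unfamiliar with GJAM, the following R sketch shows how such a joint model could be set up with the gjam package on toy data. The response types, chain lengths, variable names, and the extraction of posterior coefficients are illustrative assumptions and do not reproduce the exact specification used in this study; slot names in the fitted object may differ between gjam versions and should be checked against the package documentation.

library(gjam)
library(pvclust)

set.seed(1)
n <- 60                                          # toy number of samples
xdata <- data.frame(
  landuse = factor(sample(c("YSF","MSF","OSF","MF","EFA","HA","CPA"), n, replace = TRUE)),
  county  = factor(sample(paste0("county", 1:6), n, replace = TRUE)))

# Toy responses: 5 genus relative abundances (rows sum to 1) plus 3 continuous variables
relab <- t(apply(matrix(rgamma(n * 5, 2), n), 1, function(x) x / sum(x)))
colnames(relab) <- paste0("genus", 1:5)
env <- matrix(rexp(n * 3), n, dimnames = list(NULL, c("TAGB", "pH", "P")))
ydata <- cbind(relab, env)

ml <- list(ng = 2000, burnin = 500,
           typeNames = c(rep("FC", 5),           # fractional composition (relative abundance)
                         rep("CON", 3)),         # continuous responses (biomass, soil factors)
           random = "county")                    # random group effect (check gjam docs for your version)

fit <- gjam(~ landuse, xdata = xdata, ydata = ydata, modelList = ml)

# Posterior mean coefficients (slot name per the gjam documentation; verify for your version),
# then cluster the response variables by their coefficient profiles (Euclidean distance, Ward linkage)
beta <- fit$parameters$betaMu
pv <- pvclust(beta, method.dist = "euclidean", method.hclust = "ward.D2", nboot = 100)
plot(pv)

In the study itself, the significance filter described above (95% HPD interval excluding zero) would be applied to the posterior chains before clustering; the toy sketch omits that step for brevity.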
Another advantage of the GJAM approach is the possibility of performing conditional prediction, which allowed us to simulate scenarios for a specific set of dependent variables. We adopted this tool to simulate a scenario in which all the land‐use systems (agroforests and secondary regrowth) have the same microbiome found in the MFs (the microbiome as predicted by the model). The intention here was to compare how much the soil factors and plant biomass would need to differ from the original values recorded during the measurements in each sampling site to achieve the microbiome of the mature rainforest. The level of change for each variable was summarized as a ratio of change (the ratio between the simulated value and the original value found in each site). We employed this approach to model the effects of environmental factors (the AGB and soil factors) on community structure and interactions. All the above‐mentioned analyses of the bacterial and fungal communities were done at the genus level. All analyses were performed in R using a combination of the packages gjam (Clark et al., ), pvclust (Suzuki & Shimodaira, ), ggplot2 (Wilkinson, ), and flipPlots (Displayr, ).
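To make the conditional‐prediction step above concrete, and continuing the toy objects from the previous sketch (fit, relab, env, xdata, ydata), such a simulation could look roughly as follows in R. The target microbiome values, the number of simulations, and the output slot names are assumptions for illustration and should be checked against the gjam documentation.

# Condition the microbiome block on a hypothetical "MF-like" composition and
# predict the remaining responses (biomass and soil factors)
mf_target <- colMeans(relab[xdata$landuse == "MF", , drop = FALSE])
y_cond <- matrix(mf_target, nrow = nrow(relab), ncol = ncol(relab),
                 byrow = TRUE, dimnames = list(NULL, colnames(relab)))

pred <- gjamPredict(output = fit, newdata = list(ydataCond = y_cond, nsim = 200))

# Ratio of change: simulated value / observed value for the non-conditioned responses
# (the yMu slot name follows the gjam vignette; verify for your gjam version)
sim <- pred$sdList$yMu[, colnames(env)]
ratio_of_change <- sim / ydata[, colnames(env)]
apply(ratio_of_change, 2, median)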
RESULTS

3.1 AFS are bacteria‐driven whereas secondary successions are fungal‐driven ecosystems

We used mature rainforest (MF) without any visible human perturbation as a tropical rainforest standard and compared it with three different agroforestry practices and with spontaneous secondary successions following slash‐and‐burn agriculture. The secondary successions (YSF, MSF, and OSF) had the most similar characteristics across sites (Figure ). The aboveground regrowth was characterized by an increasing number of Ascomycota and Basidiomycota fungi (Figure , clusters C1–C3) that clustered with the MF (Figure ). A second cluster grouped the homegardens (HA) together with the commercial plantation agroforests (CPA) and the enriched fallow agroforests (EFAs), likely due to a reduced proportion of some specific fungal genera of the Ascomycota and Basidiomycota phyla (Figure , clusters C1–C4) and bacterial genera of the Proteobacteria phylum (Figure , clusters C5–C6), with marked differences from the MF (Figure ).

Agroforest soils had higher topsoil pH and Ca and Mg availability, and lower soil porosity, soil saturation, and soil moisture than the spontaneous forests. CPA and HA also had higher concentrations of available P and soil organic matter than the YSF and MSF spontaneous forests. We observed a significant increase in soil organic matter along the spontaneous succession (YSF, MSF, and OSF). Aluminum saturation (H + Al) also increased along secondary succession, being highest in the MF sites. The MF sites showed more similarities with the secondary forests than with the AFS (Figure ). In general, all spontaneous forests exhibited a high proportion of different fungal genera of the phyla Ascomycota (YSF = 8, MSF = 8, and OSF = 16), Basidiomycota (YSF = 3, MSF = 6, and OSF = 9) (Figure ), and Mucoromycota (YSF = 2 and OSF = 6). Within the Mucoromycota phylum, we found two unclassified genera of arbuscular mycorrhizal fungi (AMF) (uncultured Glomeromycetes and Glomerales), both significantly more abundant in MF and in secondary succession forests (OSF and YSF) than in the AFS. In contrast, the AFS had a higher relative abundance of numerous bacterial genera (EFA = 44, CPA = 7, and HA = 53), yet only few fungal taxa (EFA = 6, CPA = 7, and HA = 25). The bacteria belonged to the phyla Acidobacteria, Actinobacteria, Bacteroidetes, Chloroflexi, Chytridiomycota, Planctomycetes, Proteobacteria, Thaumarchaeota, and Verrucomicrobia. Interestingly, Glomeromycetes clustered with variables of aboveground biomass (C3) but not with available P content in soil (C4). Glomeromycetes is the class of fungi that comprises the AMF.

The collective changes in plant biomass and soil factors also contributed to the distinction between the land‐use systems (Figures and ). Overall, the MF systems had the highest values of total aboveground biomass (TAGB), followed by the OSF and HA. Moreover, the regression coefficient of aboveground biomass shifted from negative to positive from the YSF to the OSF, suggesting a gain of plant biomass along the spontaneous secondary regrowth (Figures and ). The increase in the regression coefficient of TAGB from YSF to OSF and its similarity with the MF reflect the regrowth of plant biomass from secondary forests toward the mature rainforest level. Considering only AGB and soil factors, OSF (>30 years) even clustered together with MF. Our results show that the soil microbiome along secondary succession also seemed to recover toward the MF microbiome.
By contrast, the AFS (EFA, CPA, and HA) clustered together and differed markedly from the secondary forest successional trajectory. This clustering of the AFS apart from the spontaneous forests was driven by an increased importance of bacterial communities, followed by changes in the understory biomass (plants <10 cm diameter at breast height, dbh) and in soil nutrients, mainly high K availability and a tendency toward low Mg availability. Figure summarizes the general trends in microbial community shifts and changes in plant biomass and soil factors, as a departure from our null hypothesis (no differences between the land‐use systems, Section ). The spontaneous forests became increasingly different along the natural succession. The number of significant positive shifts (*pos) increased from 16 in the YSF to 28 in MSF and 48 in OSF sites. However, they remained distinct from the MF, which presented 71 significant positive changes. Similar patterns appeared for the negative shifts (*neg), for which the YSF started with 17 significant changes, followed by 34 in the MSF, 25 in OSF, and 70 in MF sites. Apart from that, the gray curves indicate whether the same set of positive and negative coefficients remained significant or not from one land use to the other. From that, the spontaneous forests (YSF, MSF, and OSF) became increasingly similar to the MF as the natural succession progressed. On the other hand, the different agroforest systems followed a distinct path, represented by the oscillating curves, an outcome of the increased importance of bacterial communities in those land‐use systems.

All three AFS fostered bacterial groups, with clear differences between them. Homegarden agroforestry promoted the abundance of bacterial groups in clusters C5, C6, and C8. Within these clusters, the top five strongest shifts in bacterial abundances that characterize the HA were: an uncultured bacterium from the BIrii41 family (C5), Solirubrobacter (C5), Nitrospira (C6), Rhizobium (C5), and Frankiales (C6). Only a few fungal groups more abundant in the HA were also common in the MFs: six fungal taxa from cluster C2 (Corollospora, Metarhizium, uncultured Ajellomycetaceae, Ascotricha, unc. Rhizophydiales, and Dendrochytridium); three fungal taxa from cluster C3 (Myrothecium, Mortierella, and Apiotrichum); and six uncultured fungi from cluster C4 (Tremellales, Sordariales, Lobulomycetaceae, Polyporales, Aspergillaceae, and Chaetothyriales). In CPA sites, only eight fungal genera were also common in the MF system: Archaeorhizomycetes, Hygrocybe, unc. Stereaceae, unc. Rhizophydiales, Ascotricha, and Dendrochytridium from cluster C2; and unc. Tremellales and unc. Sordariales from cluster C4. The other groups that characterize the CPA land‐use system are mainly composed of bacterial groups, notably the top five: Chloroflexi KD4‐96, Acidobacteria Subgroup 7, Nitrospirales 0319‐6A21, Nitrospirales 4‐29, and unc. Frankiales. Finally, according to the dendrogram in Figure , the EFAs were the agroforestry system closest to the MF. This is likely because both systems promoted the abundance of the same taxa present in cluster C7 (22 taxa in total, with only one fungal genus, Saitozyma). The only archaeal taxon that significantly responded to the different land‐use systems (unc. Soil Crenarchaeotic Group) was likewise abundant in both EFA and MF sites. The five most dominant groups of bacteria that characterize the EFA system are: Tumebacillus, Solirubrobacter, Massilia, Bacillus, and unc. Actinobacteria 480‐2, all of them from cluster C5.
Our model also revealed which microbial genera, plant biomass, and soil factors responded similarly to the land‐use changes via the hierarchical clustering of variables (Figure ; the Supplementary Results present a more detailed description of each cluster). Cluster C1 grouped the responses of 19 different fungal genera that belonged to eight different classes (seven Agaricomycetes, three Chytridiomycota incertae sedis, three Dothideomycetes, two Mucoromycota incertae sedis, one Glomeromycetes, one Leotiomycetes, one Xylonomycetes, and one unclassified genus from the phylum Ascomycota); the majority of them increased their relative abundance along secondary forest succession (YSF → MSF → OSF). Cluster C2 was also composed largely of fungal genera (24 in total), grouped in nine distinct classes (six Sordariomycetes, five Chytridiomycota incertae sedis, four Agaricomycetes, four Eurotiomycetes, one Archaeorhizomycetes, one Pezizomycetes, one Tremellomycetes, and two unclassified genera belonging to the phyla Basidiomycota and Cryptomycota, respectively), as well as one Gammaproteobacteria of the genus Acinetobacter. Cluster C2 also contained the regression coefficients for the changes in understory plant biomass (plants <10 cm dbh), suggesting that those fungal genera are associated with the gain of understory plant biomass that occurred in the spontaneous forests, but also related to the reduced importance of this biomass in the AFS.

From cluster C3 onwards, we observed a mixture of variable groups. This cluster contained four bacterial genera (Pseudomonas, Candidatus Koribacter, Candidatus Xiphinematobacter, and Inquilinus) and 14 different genera of fungi from six different classes (five Sordariomycetes, three Eurotiomycetes, two Agaricomycetes, two Mucoromycota incertae sedis, one Glomeromycetes, and one Tremellomycetes). In general, the variables in this cluster presented positive coefficients for the spontaneous forests (YSF, MSF, OSF, and MF) and negative shifts for the AFS (EFA, HA, and CPA), with some exceptions. Notably, the fungal genera Trichoderma, Apiotrichum, Mortierella, Geastrum, Myrothecium, and an unclassified genus from the class Glomeromycetes aggregated in cluster C3 with variables of TAGB, living aboveground biomass, and the biomass of plants >10 cm dbh (the group of plants that represents the canopy) (Figure ). Those variables shifted from negative to positive coefficients along secondary succession but were also positive in the HAs. In cluster C4, we found 19 genera of fungi associated with shrub aboveground biomass and with variables of leaf litter and dead logs; all the fungal genera belonged to six different classes (six Eurotiomycetes, four Dothideomycetes, four Sordariomycetes, one Agaricomycetes, one Chytridiomycota incertae sedis, one Tremellomycetes, and one unclassified genus from the phylum LKM15). Cluster C4 also comprised the shifts of five soil factors (pH, soil water content, soil porosity, K, and P) and four different taxa of bacteria (AKYH767, unclassified Cytophagaceae, unclassified Nitrospirales 4‐29, and HSB OF53‐F07). In clusters C5 and C6, variables more relevant in the AFS predominated.
Cluster C5 contained 19 bacterial genera grouped in 11 classes, namely: four genera of Bacilli; Actinobacteria, Alphaproteobacteria, Betaproteobacteria, Deltaproteobacteria, and Thermoleophilia with two genera each; and Acidobacteria, Anaerolineae, Holophagae, KD4‐96, and Nitrospira with one genus each. Cluster C6 comprised 27 genera of bacteria from 15 distinct classes: the Betaproteobacteria, Deltaproteobacteria, and Alphaproteobacteria, with five, four, and three genera, respectively; the Acidobacteria, Actinobacteria, and Gemmatimonadetes with two genera each; and Acidimicrobiia, JG30‐KF‐CM66, Nitrospira, Phycisphaerae, Planctomycetacia, S‐BQ2‐57 soil group, Spartobacteria, Sphingobacteriia, Thermoleophilia, and TK10 with one genus each. All of those microbes were significantly more abundant in the AFS than in the spontaneous forests. Cluster C7 grouped 31 bacterial genera from 10 classes that became increasingly relevant along secondary succession but were also important in the EFA system: Acidobacteria (9), Alphaproteobacteria (8), Actinobacteria (3), Thermoleophilia (3), Gammaproteobacteria (2), Ktedonobacteria (2), Melainabacteria (1), OPB35 soil group (1), Planctomycetacia (1), and Sphingobacteriia (1). Finally, cluster C8 represents the bacteria and archaea that became more relevant (significant positive coefficients) in the EFA and HA systems, with the only exception being five bacterial genera (Coxiella, Methylobacterium, Rhodomicrobium, unclassified Rhizobiales, and Byssovorax), which were abundant only in the EFA systems. Altogether, bacterial community responses generally tracked trends found in the soil factors (Ca and Mg in cluster C6, and soil organic matter in cluster C8), and only 10 bacterial genera were related to changes in twig biomass (cluster C5). In summary, the bacterial community played a major role in the microbiome of the three AFS, whereas the soil fungal community increased in relative abundance along secondary succession (Figures and ).

3.2 Conditional modeling provides guidelines for agroforestry systems to better mimic the mature forest

The joint analysis of the microbiome, the aboveground biomass, and the soil physical and chemical characteristics allowed us to simulate scenarios evaluating how vegetation biomass and soil factors would need to shift under a specific condition (Section ). Since our goal was to investigate the capacity of AFS to mimic the aboveground–belowground interactions found in mature rainforests, we simulated a scenario in which all the different land‐use systems have the microbial community estimated for the MFs, allowing the model to obtain the values of both plant and soil factors required to achieve that condition. This analysis allowed us to identify the site‐specific variables that would need to change in order to attain the MF microbiome. The results from these simulations returned the ratios of change for each system (Figure ). The secondary forests (OSF, MSF, and YSF) showed the lowest ratios of change. For the OSF to have the same microbiome as the MF, the biggest relative changes would need to occur in the biomass of shrubs, with a median increase of 2.3 times the original value, followed by a 1.8‐fold increase in the biomass of plants >10 cm dbh. For the MSF, the microbiome would require a more than fourfold increase in shrubs and dead logs.
Finally, for the YSF, the microbiome would require a 5.3‐fold increase in shrub biomass, a 4.4‐fold increase in dead logs, and a 4.2‐fold increase in plants >10 cm dbh. Only a small percentage of the secondary forest sites required ratios of change above 2.5‐fold: 25% of YSF sites for available P and 36% of YSF sites for available K; in 25% of the MSF sites, available K content would need to increase by 5.2 times, and in one OSF site, available K would need to increase by 5.1 times. In marked contrast, all three AFS would require very large ratios of change, especially for the plant biomass variables. Our simulation results show that, in order to achieve the MF microbiome, EFA would require 3.7 times more dead‐log biomass, as well as 2.6 times more litter mass and shrub biomass. The CPA systems would require similar increases in the dead‐log and shrub components (more than 3.2 times) and a 2.3‐fold increase in plants <10 cm dbh. By contrast, the soil factors were less relevant than the vegetation parameters along the spontaneous secondary forest succession, where most of the median values were close to a ratio of 1. Of the three AFS, the HA exhibited the highest ratios of necessary change for the soil variables, requiring an increase in soil nutrients (P, K, Ca, and Mg) for more than 75% of their sites (Figure ). In contrast, for the other AFS (EFA and CPA), the ratios of change were below twofold for 75% of their sites. Notably, some sites would even need to reduce the availability of soil nutrients, such as the soil P content in EFA systems and in nearly half of the CPA sites. In summary, changes in the aboveground biomass variables played a major role in allowing the different AFS to reach the microbiome of the Amazon MF.
DISCUSSION

4.1 Agroforestry system divergence and spontaneous forest convergence toward the mature forest microbiome

We obtained an ecosystem perspective on the effects of agroforestry practices and of secondary forest succession by combining the aboveground (plant biomass) with the belowground (soil factors and microbial community composition) components in a generalized joint species attribute model (Clark et al., ). Mature rainforests (MFs) are characterized by a high biomass of living aboveground plants (large vegetation and shrubs), but also dead logs, low pH values, high aluminum saturation (H + Al), soil water content, and K content. The soil microbiome was mainly composed of fungal groups, although some bacterial taxa were also relevant, as indicated by cluster C7. The fungal community also played a major role in forming the soil microbiome of all spontaneous secondary forests (YSF, MSF, and OSF). We also observed that, along secondary succession, the coefficients of the secondary forests shifted toward greater similarity with the MF. For example, the aboveground biomass of larger plants (bigger than 10 cm dbh, cluster C3) started with a negative coefficient in YSF and MSF but became positive in the OSF system. The aboveground biomass of smaller plants (less than 10 cm dbh, cluster C2) showed similar trends, with regression coefficients that became strongly positive from YSF to OSF. Aboveground biomass accumulation along secondary forest succession following shifting cultivation land use has been described in other studies (Jakovac et al., ; Pollini, ). In summary, spontaneous secondary forests become more similar to the mature rainforest ecosystem, but fully recovering the soil microbiome will require longer periods of fallow.

On the other hand, the three AFS followed a diverging path when compared with the spontaneous secondary succession. The agroecosystem profiles of EFAs, HAs, and CPAs clustered together due to their capacity to promote a higher abundance of bacterial communities. The main elements of change toward a more bacteria‐driven agroecosystem are likely the loss of mid‐sized vegetation (dbh <10 cm) and the increases in pH and soil nutrient contents (P, K, Ca, and Mg). Specific management practices in agroforestry (e.g., pruning, weeding, clearing the understory) explain the reduction of the biomass of smaller plants and reflect the farmers' need to clear area for planting desired trees and crops and to manage their access to sunlight. The nutrient inputs regularly applied in CPA systems, as well as the nutrient hotspots caused by sweep‐and‐burn in homegardens (Leite et al., ; Winklerprins, ), selected for fast‐responding bacteria (Alpha‐, Beta‐, Delta‐, and Gammaproteobacteria) and likely explain the increased overall abundance of bacteria (Delgado‐Baquerizo et al., ). EFA is the type of agroforestry system with the clearest intention of benefiting from mimicking the secondary succession while providing food, crops, and wood for the farmers. Interestingly, our results showed that this system differed from all spontaneous forests by promoting the abundance of several bacterial groups in clusters C5–C8. Mulching caused by slash‐and‐mulch (chopping and dropping selected plants in the understory) also explains the positive effects on soil organic matter and the promotion of bacterial groups. Altogether, agroforestry practices created new habitat conditions that fostered a microbial community composed mostly of bacteria and archaea, diverging from those in spontaneous forest soils.
Bacteria‐dominated clusters (C5–C8) also reflect the impacts of land use on soil Ca and Mg availability, soil organic matter, and soil carbon stocks. On the other hand, clusters dominated by fungi (C1–C4) grouped together with variables of aboveground biomass. The complexity of the soil bacterial community is primarily governed by soil nutrients, whereas the fungal community is more strongly associated with variables related to plant aboveground biomass. The increased importance of fungi along secondary succession suggests a crucial role of the fungal community in the rapid recycling of nutrients. In tropical rainforests, trees thrive in deeply weathered and nutrient‐poor soils by accumulating nutrients in their biomass and efficiently cycling them to avoid nutrient loss via leaching and soil erosion (Cuevas & Medina, ). Our findings suggest that fungal communities play a crucial role in nutrient cycling in MF and along secondary forest succession but not in AFS. A further result of our study is the finding that the changes in Glomeromycetes are closely associated with the variables of aboveground biomass (clustered together in C3) and, to a lesser degree, with plant‐available topsoil P (present in cluster C4). Arbuscular mycorrhizal fungi are known to strongly affect plant population and community biology and vice versa (Bonfante & Anca, ; Tedersoo et al., ). Our results suggest a stronger codependence between AMF and aboveground biomass than between plant biomass and topsoil P availability. These results are likely the outcome of the vegetation's ability to sustain AMF communities and the capacity of mycorrhizal fungi to access sources of P in the soil that are less available to the plants (Bolan, ; Guo et al., ). Glomerales were more prevalent in mature rainforests than in the AFS, and this difference was most pronounced in plantation agroforestry (CPA) systems. We therefore confirm the relative importance of AMF to the secondary succession and their reduced relevance for the AFS. We also noticed an increased importance of the bacterial taxa associated with the nitrogen cycle (e.g., Rhizobium , Frankiales , Nitrospira , Nitrospirales 0319‐6A21, and Nitrospirales 4‐29). This is another characteristic in which the AFS diverge from the secondary successional path. Previous studies indicated that along succession the N cycle becomes less relevant and that OSFs and mature rainforests are more P limited than N limited (Davidson et al., , ). The reduced importance of microbes related to the N cycle and the negative regression coefficients for P availability, coupled with the increased importance of AMF in the spontaneous secondary forests and mature rainforests, suggest a role of the N–P trade‐off in determining the ecosystem profile of the Amazon rainforests. Brouwer and Riezebos ( ) highlighted that nitrification becomes a key soil process after logging, which likely explains the increased abundance of nitrogen fixers and nitrite‐oxidizing bacteria as the top‐responding bacteria to the agroforestry practices (notably, CPA and HA). Therefore, even the small‐scale logging performed in the AFS (e.g., pruning and clearing of the understory) can induce changes in nutrient cycling and affect the soil microbiome. The increasing land use pressure throughout the tropics does not allow for strategies relying purely on secondary forest succession, and AFS have been identified as a promising alternative land use (Angelsen & Kaimowitz, ; Nair, ).
Agroforestry systems provide crops, fruits, and wood with a concomitant increase in agroecosystem complexity (Atangana et al., ) that mimics the structure of native forests (Young, ). The mimicry hypothesis was elaborated by Ewel ( ) and extended by van Noordwijk and Ong ( ), suggesting that AFS are capable of imitating the structure and functions of natural ecosystems, thus benefiting agricultural sustainability. However, our analysis of the soil microbiome reveals that the capacity of AFS to mimic the complex interactions found in mature rainforests is low. As the soil microbiome plays a central role in the full maintenance of the ecosystem services sought from forests, AFS should adjust their management practices to strengthen the aboveground–belowground interactions for more sustainable and eco‐efficient land‐use systems. 4.2 Key aspects to better mimic the mature forest With our model‐based approach we were able to determine the plant biomass and soil factors that would need to be adjusted in order to speed up recovery toward the mature rainforest standard. These new agricultural practices are system and site specific but, in general, involve increasing the aboveground biomass (e.g., dead logs, shrubs, and mid‐sized vegetation [plants <10 cm dbh] for CPA; dead logs, shrubs, twigs, and large vegetation [plants >10 cm dbh] for EFA) and/or the soil nutrient availability (e.g., P, K, Ca, and Mg for HA). However, the bacterial‐driven microbiome present in the AFS may be difficult to displace, as this would require that key soil factors and plant biomass double or triple in order to reach the microbiome of mature rainforests. Consequently, the goal of mimicking MF may be unattainable for commonly used agroforestry practices, thus posing a potential obstacle in efforts to restore aboveground–belowground interactions (plant biomass and soil factors) and related functionalities. Most agroforestry practices are considered low‐impact land‐use practices that maintain similar or even higher aboveground biomass (Cardozo et al., ). By modeling the soil microbial community jointly with the aboveground biomass and soil factors, we moved beyond the mere identification of the impacts of each land‐use system. Our findings reveal that agroforestry practices reduced the interdependence between the soil microbiome and the vegetation. This may be the result of the reduction of plant–soil interactions caused by agroforestry land management (nutrient inputs, pruning, weeding, etc.), which reflects efforts to regulate ecosystem productivity toward consumption or market‐related production. Manzoni et al. ( ) showed that plant residues that are chemically too homogeneous do not promote functionally diverse microbial communities. Selecting agroforestry plant species based only on their cash value could therefore cause AFS to exert detrimental effects on the soil microbiome, for instance by reducing the role of fungi (relative to bacteria) in linking above‐ and belowground ecosystem elements. We also acknowledge the importance of plant diversity in contributing to better mimicking the complex interactions in MF, which goes beyond the scope of our study. Future studies need to jointly model the responses of plant and microbial diversity along secondary forest succession and in AFS.
Nevertheless, our multi‐faceted approach suggests that changes in land use, whether agriculturally manipulated or occurring as spontaneous secondary succession after shifting cultivation, cause consistent alterations in the tripartite plant–soil–microbe interactions. Agroforestry systems remain an important alternative to slash‐and‐burn agriculture, and previous studies confirmed that they are capable of recovering carbon faster than spontaneous secondary forests (Cardozo et al., ). In addition, all the AFS we studied resulted in a higher income:cost ratio when compared with slash‐and‐burn agriculture (Cardozo et al., ). Homegarden agroforests have the advantage of maintaining high diversity in rural areas (Mohri et al., ), and species‐rich AFS promote food sovereignty (Armengot et al., ). Enriched fallow agroforests allow farmers to grow crops and food in areas that otherwise would be used for slash‐and‐burn agriculture. Finally, CPAs developed by Japanese immigrants in the eastern Amazon represent a success case in promoting large‐scale and profitable production of agroforests in the Amazon region (Cardozo et al., ). Our findings can contribute to improving their agroforestry practices and increasing their sustainability via better management of the aboveground–belowground interactions.
Márcio Fernandes Alves Leite, Flávio Henrique Reis Moraes, Guillaume Xavier Rousseau, and Christoph Gehring designed the research; Márcio Fernandes Alves Leite, Ernesto Gómez Cardozo, Hulda Rocha e Silva, Ronildson Lima Luz, Guillaume Xavier Rousseau, and Karol Henry Mavisoy Muchavisoy conducted the sampling in field; Márcio Fernandes Alves Leite, Ernesto Gómez Cardozo, Hulda Rocha e Silva, and Karol Henry Mavisoy Muchavisoy performed the estimation of plant biomass and the physico‐chemical analysis of the soil; Márcio Fernandes Alves Leite, Binbin Liu, and Eiko Eurya Kuramae performed the molecular laboratory work and analyses; Márcio Fernandes Alves Leite performed the statistical analyses; and Márcio Fernandes Alves Leite, George Kowalchuk, Christoph Gehring, and Eiko Eurya Kuramae wrote the paper. All authors reviewed the manuscript.
The authors declare no competing financial interests.
|
Sports Concussion and Chronic Traumatic Encephalopathy: Finding a Path Forward
|
80f62072-97da-41cd-8fbc-fcd6b73eea43
|
10108279
|
Forensic Medicine[mh]
|
The neuropathology of CTE was first described in a postmortem series of brains from amateur and professional boxers in 1973, and this report was preceded by decades of clinical observations of boxers who developed various combinations of cognitive, mood, and motor symptoms in the years following their careers. However, most of our modern understanding of CTE began with a 2005 autopsy report of a former professional football player who developed neurobehavioral dysfunction years after his career ended and whose brain disclosed CTE neuropathology. Since then, a series of reports, many from a group at Boston University, , , have contributed further neuropathological data to this field, and CTE has been characterized as a progressive tauopathy resulting from repeated concussions or subconcussive blows. The growing number of cases detected in former athletes whose brains were referred to the Boston group and others has prompted the assertion that CTE is common and vastly underrecognized. Whereas most of the reported cases have been found on postmortem examinations of professional or older athletes with long careers involving contact or collision sports, a few cases have been detected in the brains of young athletes with relatively short exposures to such forces, leading some to conclude that even relatively short spans of such exposures might lead to the characteristic neuropathological lesions of CTE. Clinical study of sports concussion has also proceeded in earnest. Beginning in 2001, for example, a consensus process was developed as the side product of a convening of concussion specialists invited by the International Ice Hockey Federation, the International Olympic Committee, and the International Federation of Football Associations to provide an educational conference for medical professionals engaged in the care of hockey players, Olympic athletes, and soccer players around the globe. The conference organizers saw the opportunity for this highly specialized group of medical experts known as the Concussion in Sport Group (CISG) to reach consensus as to definitions, assessments, and management approaches that have become more detailed and influential over the years. Addressing the issue of CTE in 2017, the 5th CISG gathering in Berlin drew the conclusion that a cause‐and‐effect relationship between exposure to contact sports and CTE had not yet been demonstrated. In contrast, another international group recently applied the Bradford Hill criteria and found “convincing evidence” of a causal relationship between repetitive head impacts and the development of CTE. Strong opinions continue to animate both sides of this controversy. As might be expected from the enormous popularity of American football, the literature on CTE has generated much concern in the general public, and also not unexpectedly, the sports community has been reluctant to acknowledge its existence. Although a great deal has been written about the unintentional head injuries of football, and to a lesser extent hockey, far less attention has been focused on sports in which such blows are intentional, such as boxing and mixed martial arts. One is left to wonder why “sports” in which the objective is to produce brain injury receive less scrutiny, especially given the historical evidence of links between boxing and neurodegenerative disease and death. 
As an example, although many professional medical organizations had called for an outright ban on boxing in the 1980s, the American Academy of Neurology has since adopted a more neutral posture and currently has no official position on the matter. A lingering problem is the inability to diagnose CTE during life. In 2021, an expert panel convened by the National Institute of Neurological Disorders and Stroke published consensus criteria for the clinical diagnosis of "traumatic encephalopathy syndrome" (TES) in patients whose neurobehavioral deterioration following exposure to repetitive head impacts could not be explained by any other health condition. This consensus statement attempted to establish a useful approach to the evaluation and care of those whose clinical condition could represent CTE, but the fact remains that the diagnosis can only be made postmortem.
Exposures to TBI correlate with increased risk of dementia and higher rates of disability, potentially affecting a large proportion of the population, although these associations have not yet been specifically explored with large‐scale studies for underlying neuropathology. , Although some may consider CTE a possible explanation, several challenges arise in answering how CTE impacts the general public. For instance, autopsy rates in the United States have fallen steadily in recent years, now to <10% even including forensic autopsies. Moreover, brains are not usually examined for CTE at autopsy. In the forensic setting, the majority of brains are only examined grossly and in the fresh state. Routine, nonforensic autopsy brains, particularly in academic centers, are often examined both grossly and microscopically, but the testing necessary to diagnose CTE—immunohistochemistry for pTau—is only performed as indicated. The only environments where the requisite testing routinely occurs are research brain repositories, mostly those concerned with aging and neurodegeneration, where substantial collection biases exist. A few studies have nevertheless provided a glimpse into the prevalence of CTE in the community at large. A 2021 study reported on 532 brains of elderly individuals from the general public donated to a neurodegenerative brain bank, including 107 cases with remote mTBI and loss of consciousness. There were only 3 cases of CTE (0.6%), none of which, interestingly, was among the 107 mTBI cases. In an earlier study from a similar brain bank, CTE was reported in 21 of 66 elderly former contact sport athletes (32%), and in none of 198 controls. Finally, a study of 225 military brains found a CTE rate of 4.4%, despite high rates of both military and civilian mTBI in the decedents. These studies suggest that in community populations (as opposed to cohorts selected for repetitive mTBI such as contact sports athletes), mTBI‐related disability, including dementia, is not widely attributable to CTE, at least by its present definition. CTE is currently diagnosed by using pTau immunohistochemistry to identify a pathognomonic lesion characterized by pTau accumulation within neurons in a perivascular distribution at the depths of cortical sulci (Fig ). An important point is that sulcal depth‐ and perivascular‐predominant pTau pathology is not a pattern observed in other tauopathies. Also noteworthy is that identification of one lesion is presently sufficient to diagnose CTE. This practice could be criticized, as a single microscopic lesion may not reflect any symptomatology, but a similar diagnostic approach is used in many neurodegenerative diseases that also have minimal neuropathological criteria regardless of clinical symptoms. However, for well‐established neurodegenerative diseases, principally Alzheimer disease, clinically validated neuropathological staging criteria allow for the correspondence of pathological severity with likelihood of dementia. The development and validation of these criteria required decades of prospective research. By comparison, the state of knowledge of CTE is evolving, and the clinical validation of proposed neuropathological staging criteria will take many years. Therefore, a diagnosis of CTE does not imply clinical manifestations, and we do not know the extent of neuropathology needed to correlate with the likelihood of symptoms. 
Regarding a dose–response relationship with mTBI and CTE risk, it is established that repeated impact mTBI is associated with CTE development, and duration and level of play in contact sports may also be influential. However, the precise “dose” of mTBI necessary to trigger CTE neuropathology remains unknown. Although most cases are documented in those with longstanding mTBI exposure, such as elite athletes, CTE has been found in amateur athletes with less exposure, and even in association with a single documented mTBI. Individual factors, such as genetic predispositions, may play a role, but discovery of these has been elusive. Questions regarding alcohol, drug use, and heart disease in CTE pathogenesis have emerged, but no evidence supports an etiologic connection between these factors and the pathognomonic lesion of CTE.
To neurologists, CTE presents a conundrum in many respects. As mentioned above, the diagnosis presents a major challenge, as the consensus criteria for a corresponding clinical disorder of TES do not necessarily identify CTE, and conventional neuroimaging cannot be relied on to disclose any specific findings. The imaging of tau in the living brain with positron emission tomography has much promise for improving clinical diagnosis, but tau binding has thus far been nonspecific and not diagnostically useful. Also potentially helpful may be blood and cerebrospinal fluid biomarkers such as tau and neurofilament light, but these are still investigational; plasma total tau, for example, is elevated after subconcussive head impacts in college football players, but it is uncertain whether it correlates with the severity or number of impacts. Another gap in knowledge is what types of head injury may be associated with CTE, an issue highlighted by the proposal of subconcussion, an event presumably caused by a head blow less forceful than that which causes concussion, but lacking a precise definition. The prevalence of CTE is also unknown; whereas published cases now number in the hundreds, it is not clear how many people with repeated concussions would not have the disease if examined at autopsy. An acquisition bias thus exists such that we have a numerator of identified cases, but not a denominator based on the population at risk. In this regard, it is of considerable interest that CTE appears to be uncommon in military personnel who have also had repeated mTBI. Perhaps the most difficult clinical problem is that the wide range of reported clinical features (such as cognitive impairment, depression, and motor dysfunction) can suggest to previously concussed people that they have CTE when many other explanations may be more plausible. Thus the propensity to engage in self‐diagnosis may seriously hinder the seeking and obtaining of care that can be helpful even if CTE is not the cause of symptoms.
A wealth of evidence supports the notion that physical trauma to the brain can have deleterious effects on cognition, mood, and motor function, and it is highly probable that multiple blows to the head are more harmful than one alone. Although the currently available, albeit limited, population data may suggest that CTE is uncommon relative to the number of individuals who sustain repeated mTBI, we tentatively conclude that repeated impact mTBI may lead to the neuropathology of CTE in some individuals. Many questions, however, remain unanswered. Although some data support a dose–response relationship between repeated mTBI and CTE, we do not yet know the extent of mTBI that is necessary to predict a high likelihood of CTE development. The neuropathology of CTE, especially when minimal, can also exist in the absence of symptoms, and thus we also do not know the extent of CTE neuropathology necessary to predict clinical manifestations. In the future, we look forward to detailed longitudinal investigation of people with repeated impact mTBI, using clinical evaluation combined with neuroimaging and fluid biomarkers, to enable accurate in vivo diagnosis and follow‐up of CTE based on solid neuroscientific evidence. We can then hope that the information so disclosed may lead to prevention of, and effective treatments for, this emerging neurobehavioral problem.
All authors contributed to the conceptualization and writing of this article.
The information/content, conclusions, and/or opinions expressed herein do not necessarily represent the official position or policy of, nor should any official endorsement be inferred on the part of, Uniformed Services University, the Department of Defense, the US Veterans Administration, the U.S. Government, or the Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc.
Nothing to report.
|
Effect of combining UV‐C irradiation and vacuum sealing on the shelf life of fresh strawberries and tomatoes
|
2b1725f0-2007-4c89-9163-f9b4b1f9180d
|
10108318
|
Microbiology[mh]
|
INTRODUCTION Food loss and waste are among the most pressing global issues. They are commonly regarded as a significant obstacle to global sustainability, and they significantly affect the global economy; according to the Food and Agriculture Organization (FAO), the worldwide cost of food waste is estimated at $750 billion (McGuire, ; Xue et al., ). The United Nations estimated that food waste produces over 3 million tonnes of greenhouse gases (GHGs), representing 11% of all GHGs (FAO, ; Scherhaufer et al., ). Food loss and waste occur for various reasons across the supply chain: some loss is due to microbial infection, which can cost up to 30% of a crop's overall yield, while on the other hand farmers, sellers, and processors may discard food that they expect to be undesirable based on consumers' perceptions of product quality (Papargyropoulou et al., ). The quality deterioration of fresh produce can occur in a variety of ways during the growing, handling, harvesting, and transportation processes, and even after purchase by consumers or service providers. Nevertheless, the main causes of loss and waste are microbial infections, improper handling and packaging, ineffective storage systems, insufficient on‐farm storage facilities, and harsh weather conditions (Joardder & Masud, ; Saeed et al., ). Globally, numerous food items are increasingly being sold in regions that are geographically far from their origins; therefore, there is a growing demand for a longer food shelf life and advanced food preservation methods that can be used during storage and transportation (Rawat, ). Fragile produce such as tomatoes and strawberries are more prone to postharvest deterioration than other produce due to their delicate textures and high moisture content. Tomatoes and strawberries are among the most consumed produce globally (Saeed et al., ; Xue et al., ). However, they generate high loss and waste volumes across various points in the supply chain due to their high perishability. The postharvest tomato wastage in Europe is estimated at 3 million metric tons per year. In Australia, yearly tomato crop losses were estimated to be between 27% and 36% (Løvdal et al., ). Similarly, in‐field food loss statistics for tomatoes in Florida showed an average loss of 40% of the crop (Thorsen et al., ). On the other hand, fresh strawberries are one of the most popular fruits in the world and are in great demand due to their flavor, high nutritional content, and range of health advantages, including anti‐inflammatory, anticancer, and antioxidant properties (Shahbazi et al., ). However, fresh strawberries are highly perishable, with a short postharvest life mainly due to their high respiration rate, excessively soft texture, sensitivity to temperature, water loss, microbiological decay, and mechanical injury and vibrations, which makes their marketing a challenge (Shahbazi, ). Strawberry waste happens at every stage of the life cycle, including production, distribution, retail, and household handling. This amounts to an estimated 640 million pounds of strawberries lost, with a market value of $1.4 billion (Kessler, ). Several food preservation methods have been developed in recent years to extend the shelf life of perishables and reduce waste. Edible coatings have been extensively studied as a natural means of controlling the growth of microorganisms and extending the shelf life of fresh produce.
For instance, gum arabic (a polysaccharide) and mango kernel starch have been used as edible coatings for extending the shelf life of tomatoes, where both coating materials increased the shelf life of tomatoes by up to 20 days at a storage temperature of 20°C (Ali et al., ; Nawab et al., ). Several edible coating materials have also shown positive results and improved the shelf life of strawberries under cold storage (Pinzon et al., ; Saleem et al., ). However, although edible coatings can enhance the shelf life of produce, their poor barrier properties and unappealing flavor are among the main disadvantages of this method (Duguma, ). Ultraviolet irradiation is considered a highly effective method for prolonging the shelf life of perishables (Delorme et al., ). The wavelength region of UV‐C light (200–280 nm) is known for its germicidal effects, as it damages the DNA of pathogenic microorganisms, affecting their metabolism and reproduction and ultimately resulting in cell death (Brem et al., ; Gayán et al., ). Various researchers have demonstrated the positive effect of UV‐C radiation in limiting microbial growth and increasing the shelf life of perishables (Allende et al., ; Gogo et al., ; González‐Aguilar et al., ; Khan & Kaneesamkandi, ; Liu et al., ; Manzocco et al., ; Pinheiro et al., ; Rabelo et al., ; Rodoni et al., ). A recent study (Araque et al., ) reported that a UV‐C irradiation dose of 4 kJ/cm² can be useful to extend the shelf life of fresh‐cut strawberries (stored at 4°C) for up to 7 days. Furthermore, fresh strawberries were subjected to UV‐C irradiation at lower doses of 0.8–4 kJ/m², and results revealed that the treatment increased the shelf life of strawberry samples for up to 13 days while stored at 0°C (Araque et al., ). The shelf life of tomatoes can also be increased through UV‐C treatment. Pataro et al. ( ) found that the shelf life of fresh tomatoes can be increased for up to 21 days when a UV‐C irradiation of 1−8 J/cm² is applied at a storage temperature of 20°C. Similarly, another study (Pinheiro et al., ) reported that the shelf life of tomatoes could be increased by up to 15 days after the treatment of fruit with UV‐C light (0.32−4.83 kJ/m² at 254 nm). However, despite the spoilage delay after UV‐C irradiation, the long storage time can affect the sensory characteristics of food and cause tissue softening, browning, aroma deterioration, and weight loss (Allende et al., ; Rodoni et al., ). Vacuum sealing is another widely used method for food preservation due to its ability to extend the shelf life of perishables, decreasing the weight loss and browning index of fresh produce while retaining its firmness (Moradinezhad & Dorostkar, ; Moradinezhad et al., ; Othman et al., ). Vacuum packaging can increase the shelf life of strawberries by 1−3 days (Putri et al., ). It has also been reported to increase the shelf life of tomatoes up to 21 days at a storage temperature of 4°C while maintaining the fruit's quality (phenolic and carotenoid content) and vitamin C content (Odriozola‐Serrano et al., ). The sole use of UV‐C irradiation or vacuum sealing has been widely studied and applied in various food industries; however, the effect of the combination of both preservation methods has not been studied yet. Therefore, this paper investigates the effectiveness of combining UV‐C irradiation and vacuum sealing in extending the shelf life of fruits in comparison to the sole use of UV‐C irradiation or vacuum sealing.
Tomatoes and strawberries were chosen for this research due to their high market demand, high monetary value, and high perishability.
MATERIALS AND METHODS 2.1 Sample preparations Whole tomatoes and strawberries that were free from external defects were purchased from a local supermarket (Thuwal, Saudi Arabia). The strawberries were supplied by Driscoll's Inc. (Watsonville, CA, USA), and the tomatoes were supplied by Mahasil Agriculture Company Co. (Unayzah, Saudi Arabia). All samples were washed using tap water and dried using napkins. Whole strawberries were used for the shelf‐life experiments, whereas the tomatoes were cut into four equal quarters. 2.2 Experimental setups Every experiment consisted of eight samples: four strawberry samples, each weighing 100 g, and four quartered tomato samples, each weighing 200 g. All the samples were stored in 1.4‐L plastic containers made of Eastman Tritan PCTG TX1001 at a temperature of 4°C and a relative humidity level of 60%. To investigate the effectiveness of the combination of UV‐C irradiation and vacuum sealing in extending the shelf life of the whole strawberry and quartered tomato samples, the physiochemical characteristics of the fruits were examined while storing the samples under four different conditions: (1) an anaerobic and sterilized environment created by UV‐C irradiation and vacuum sealing (UV‐C and vacuum); (2) an aerobic sterilized environment created by UV‐C irradiation (UV‐C only); (3) an anaerobic environment created by vacuum sealing the storage container (vacuum only); and (4) a normal aerobic environment (control). Four UV‐C lamps (253.7 nm, 2G11, 18 W; Philips, Shanghai, China) were mounted on movable racks from the right, left, top, and bottom sides of the storage container. Each lamp was turned on for 10 min prior to exposure to stabilize the wavelength at 253.7 nm. The distance between the samples and each lamp was 2 cm, and the exposure time was 30 s. The samples received a UV‐C light dose of 360 J/m², with the UV‐C light intensity measured using an ultraviolet meter (Zenith, Atlantic Ultraviolet Corporation, New York, USA). The vacuum‐sealed samples were maintained at a reduced pressure of 40 kPa using a 12‐V oxygen vacuum pump (JP1). As determined by the AR8100 oxygen sensor, the oxygen levels in the vacuumed containers ranged from 1.7% to 1.8%, whereas those in the unvacuumed containers ranged from 20.7% to 21%. The strawberry and tomato shelf‐life experiments were repeated three times using different samples that were harvested at different times to increase the accuracy of the shelf‐life estimation. 2.3 Shelf life and quality examination 2.3.1 Organoleptic analysis Color and appearance, flavor (taste and aroma), and texture are the main distinguishing sensory characteristics of fruits and vegetables that influence the consumers' purchase and consumption decisions (Barrett et al., ). Therefore, these characteristics of all experimental samples were examined daily to monitor the changes in color, wilting, smell, texture, and appearance to identify spoilage. A group of 12 untrained panelists, consisting of six females and six males aged 21–41 years, evaluated the sensory characteristics of the samples every 3 days during the experimental period. The panelists examined the qualitative traits of strawberries and tomatoes, namely color, texture, taste, aroma, and overall acceptance. The evaluation was provided on a scale from 1 to 9, where 1 is the lowest score that indicates undesirable sensory traits and 9 is the highest score that indicates attractive sensory traits.
For instance, the score of 9 for strawberry and tomato implies an attractive red shell color, firm pulp texture (as resistance to finger pressure), extremely fresh flavor, and very fresh aroma, while the score of 1 implies very soft pulp, very dark or brownish shell color, soft shell, slimy texture, off‐taste, and off‐aroma. 2.3.2 Weight loss The weight loss of each sample was determined by comparing the weight on the first day of storage with that on the last day prior to spoilage. The weight loss percentages were calculated as follows (Hosseini et al., ): Weight loss (%) = [(Initial weight − Final weight) / Initial weight] × 100. 2.3.3 Microbial analysis Aerobic mesophilic bacteria, Enterobacteriaceae, Pseudomonas sp., and lactic acid bacteria were counted for every sample by applying Plate Count Agar (PCA), Violet Red Bile Glucose agar (VRBG), Pseudomonas CFC Agar Base, and De Man, Rogosa and Sharpe agar (MRS), respectively. Dichloran Rose‐Bengal Chloramphenicol agar (DRBC) was used to enumerate yeast and mold. The work area and tools were cleaned with 70% alcohol. Thereafter, 5 g from each sample was taken for the serial dilutions and placed in stomacher bags, 45 ml of sterile physiological solution was then added, and the mixture was homogenized in the stomacher for 1 min to obtain the initial suspension (10⁻¹ dilution). For the second dilution (10⁻²), a sterile pipette was used to transfer 1 ml from the first dilution into 9 ml of the physiological solution, and the same process was repeated for additional dilutions. The microbial loads were enumerated by spread‐plating 100 µl of each dilution onto the PCA, DRBC, and CFC plates and pour‐plating 1 ml of each dilution into VRBG and MRS. The Petri dishes of the aerobic mesophilic bacteria, Enterobacteriaceae, Pseudomonas sp., and lactic acid bacteria were incubated for 24 h at 37°C, while the yeast and mold Petri dishes were incubated for 5 days at 25°C. The microbial populations were detected on separate days during the experimental period, and the results were expressed as the log of colony‐forming units per gram (log CFU/g) (Hosseini et al., ). 2.3.4 pH measurements The pH levels were measured by taking 2.5 g from each sample and mixing it with an equal amount of distilled water, as stated in the ISO 1842:1991 standard. A Thermo Scientific Orion 5 Star Benchtop Multiparameter Meter was used for measuring the pH.
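As a worked illustration of the arithmetic used in Sections 2.2 and 2.3, the short sketch below computes a UV-C dose from irradiance and exposure time, the weight-loss percentage from the formula above, and log CFU/g from a plate count using standard plate-count arithmetic. It is a sketch added for clarity rather than code from the original study, and the numeric values are illustrative only.

```python
import math

def uv_dose_j_per_m2(irradiance_w_per_m2: float, exposure_s: float) -> float:
    """UV-C dose (J/m^2) = irradiance (W/m^2) x exposure time (s)."""
    return irradiance_w_per_m2 * exposure_s

def weight_loss_percent(initial_g: float, final_g: float) -> float:
    """Weight loss (%) = (Initial - Final) / Initial x 100, as in Section 2.3.2."""
    return (initial_g - final_g) / initial_g * 100.0

def log_cfu_per_g(colonies: int, dilution: float, plated_ml: float) -> float:
    """log10 CFU/g from a plate count.

    CFU/g = colonies / (plated volume in ml x dilution factor); this is standard
    plate-count arithmetic, not a formula stated explicitly in the paper.
    """
    return math.log10(colonies / (plated_ml * dilution))

# Illustrative numbers only.
# A dose of 360 J/m^2 delivered over a 30-s exposure corresponds to about 12 W/m^2
# at the sample surface, assuming the dose accrued uniformly over the exposure.
print(uv_dose_j_per_m2(12.0, 30.0))      # 360.0 J/m^2
print(weight_loss_percent(100.0, 93.5))  # 6.5 % weight loss
print(log_cfu_per_g(52, 1e-2, 0.1))      # ~4.7 log CFU/g for 52 colonies on a 10^-2 spread plate
```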
RESULTS AND DISCUSSION 3.1 Weight loss and sensory characteristics evaluation Weight loss is an indication of dehydration, decomposition, and overall quality decay, which directly contribute to the shelf‐life deterioration of fresh produce. All experimental samples in this study decayed throughout the storage period as shown in Figure ; however, the weight loss percentages were influenced by the storage condition. As shown in the figure, in all the experimental iterations the samples that were stored in the UV‐C and vacuum condition lost less weight than the samples that were either irradiated with UV‐C light only or only vacuum sealed. The vacuum storage condition showed the second‐lowest decay, followed by the UV‐C condition, while the control samples experienced the largest decay. Weight loss is associated with higher polyphenol oxidase (PPO) activity, which stimulates the oxidation of phenolic compounds in fresh produce and consequently affects the firmness and color of the produce (Xu et al., ; Yoon et al., ). The low oxygen levels in the vacuumed samples limited the PPO activity and resulted in reduced weight loss. For the UV‐C‐irradiated samples, the weight loss reduction is attributed to the effect of the UV‐C irradiation on the inactivation of microorganisms, as limiting their growth reduces damage to the walls and tissues, maintains the firmness, and limits the weight reduction of the produce (Hosseini et al., ). In general, the weight loss in fresh produce is a result of the transpiration (moisture loss) and respiration (carbon loss) processes. Hence, prolonging the shelf life of perishables can be achieved by limiting the transpiration and respiration rates, which are associated with various factors including storage temperature and humidity levels, postharvest handling processes, and transportation conditions. However, since moisture passes through the skin of the fruits and vegetables before it evaporates, the transpiration rate is directly proportional to skin permeability. Skin permeability is a function of the skin mass transfer coefficient—an indicator of the skin's porosity level, which defines its resistance to the passage of moisture (Becker & Fricke, ). Chau et al. ( ) and Gan and Woods ( ) experimentally determined the skin mass transfer coefficients of various fresh produce: the mean coefficients of strawberries and tomatoes are 13.6 and 10.1, respectively, which explains the higher weight loss values of the strawberry samples in comparison to the tomato samples in the present study. 3.2 Microbial analysis The microbial populations were quantified for all the strawberry and tomato samples on different days to study the effect of each storage condition on the microbial growth rate, which influences the alteration of the sensory characteristics of the samples. The population growth of Pseudomonas sp. and yeast and mold is shown in Figure for the strawberry samples and in Figure for the tomato samples. The growth of Pseudomonas sp. was very slow in the first days for all samples, but it abruptly increased during the last few days. It was observed that the growth rates of Pseudomonas sp. were influenced by the storage environment and the chemistry of each fruit. For instance, it was noticed that the growth of Pseudomonas sp. in the strawberry samples that were stored in aerobic environments (control and UV‐C) was higher than the growth in the samples stored in anaerobic environments (Vacuum and UV‐C & Vacuum), as shown in Figure .
3.2 Microbial analysis The microbial populations were quantified for all the strawberry and tomato samples on different days to study the effect of each storage condition on the microbial growth rate, which influences the alteration of the sensory characteristics of the samples. The population growth of Pseudomonas sp. and yeast and mold is shown in Figure for the strawberry samples and in Figure for the tomato samples. The growth of Pseudomonas sp. was very slow in the first days for all samples, but it abruptly increased during the last few days. It was observed that the growth rates of Pseudomonas sp. were influenced by the storage environment and the chemistry of each fruit. For instance, it was noticed that the growth of Pseudomonas sp. in the strawberry samples stored in aerobic environments (control and UV-C) was higher than the growth in the samples stored in anaerobic environments (Vacuum and UV-C & Vacuum), as shown in Figure . In contrast, the Pseudomonas growth in the quartered tomato samples was higher in the anaerobic storage environments (Vacuum and UV-C & Vacuum) and lower in the aerobic storage environments (control and UV-C), as shown in Figure . This difference in the growth rates between the storage conditions is attributed to the characteristics of Pseudomonas . The majority of Pseudomonas sp. are obligate aerobes, where oxygen is used as a terminal electron acceptor (Robinson, ). Yet, some Pseudomonas species can grow anaerobically upon the availability of nitrate or nitrite, which can also be used as a terminal electron acceptor (Robinson, ; Schaechter, ). In fresh tomato fruits, the nitrate levels range between 0.93 and 66.54 with an average of 12.55 ± 0.002 (mg/kg FW ± SE) (MirMohammad-Makki & Ziarati, ). Therefore, the high growth of Pseudomonas sp. in the tomato samples that were stored in anaerobic environments was likely stimulated by the nitrate content of the fruit itself. As for the yeast and mold populations, a reduction in the growth rates was observed in all the tomato and strawberry samples that were irradiated using UV-C light, whereas the slowest growth rate was observed in the UV-C and vacuum storage condition, as presented in Figures and . A different growth behavior was observed in the strawberry samples of run 3, where the UV-C-irradiated sample expired before the vacuumed sample, which could be attributed to a potential difference in the initial microbial content of each sample. The growth rates of yeast and mold, and Pseudomonas sp., were compared to the changes in the sensory evaluation of the samples during their shelf life. Noticeable increases in the microbial populations were observed on the days when organoleptic spoilage was identified, as shown in Figures and . The population of yeast and mold that indicates the spoilage of strawberries and tomatoes ranged from 21 × 10^4 to 25 × 10^4 CFU/g and from 21 × 10^4 to 24 × 10^4 CFU/g, respectively. Since these microbial population levels were reached on later days for all the UV-C and vacuum samples, this storage condition proved its positive effect on maximizing the shelf life of strawberries and tomatoes in comparison to the other storage conditions. This positive effect is mainly due to the combined effect of (1) the UV-C irradiation, which damages the DNA of microorganisms and affects their metabolism and reproduction, ultimately resulting in cell death (Brem et al., ; Gayán et al., ), and (2) the vacuum sealing, which limits the oxygen level necessary for the metabolism and growth of microorganisms (Shajil et al., ). The oxygen levels in the storage containers after vacuum sealing range between 1.7% and 1.8%, while the oxygen levels in the unvacuumed containers range between 20.7% and 21%. Since the oxygen level plays a pivotal role in microbial growth, it is important to study the packaging parameters that can influence the oxygen level, such as the Oxygen Transmission Rate (OTR), the Carbon Dioxide Transmission Rate (CTR), the Water Vapor Transmission Rate (WVTR), and the oxygen and water permeabilities. Hence, our future research can focus on studying the transmission rates, the permeabilities, and their correlation with the shelf life of food. In general, eliminating oxygen from the containers creates oxygen and vapor barriers, which impede undesirable oxidative reactions in the food during storage. Yet, the anaerobic storage condition is a favorable environment for some bacteria, such as Clostridium botulinum , which thrives in low-oxygen conditions and produces harmful toxins. However, since the ideal temperature range for Clostridium botulinum growth is 20–37°C, its growth will be limited as the samples were stored at 4°C (Fields et al., ; Tanner & Oglesby, ). Aside from the yeast and mold counts, the populations of aerobic bacteria, lactic acid bacteria, and Enterobacteriaceae were examined and found to be too few to count in all the tomato and strawberry samples. This low count is due to the natural acidity (low pH) of tomatoes and strawberries, which creates an unfavorable environment for the growth of many spoilage microorganisms, especially bacteria (Barth et al., ). However, the low pH environment is suitable for the growth of yeast and mold (Petruzzi et al., ). Therefore, the increase in the yeast and mold colonies has been inferred to be the main cause of the degradation in the sensory characteristics of strawberries and tomatoes.
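As a small illustration of how the spoilage-indicating counts reported above can be tied to a spoilage day, the sketch below returns the first sampling day on which a yeast and mold count reaches a threshold (here the lower bound of the range reported above, 21 × 10^4 CFU/g). The sampling days and counts are invented for illustration.

```python
# Minimal sketch: find the first sampling day on which a microbial count
# reaches a spoilage threshold. All numbers below are illustrative only.

def first_day_at_threshold(days, counts, threshold):
    """Return the first sampling day whose count >= threshold, or None."""
    for day, count in zip(days, counts):
        if count >= threshold:
            return day
    return None

sampling_days = [0, 3, 6, 9, 12, 15, 18, 21]
# Hypothetical yeast-and-mold counts (CFU/g) for one sample.
yeast_mold_counts = [1.0e3, 2.5e3, 8.0e3, 3.0e4, 9.0e4, 1.6e5, 2.3e5, 2.6e5]

SPOILAGE_THRESHOLD = 21e4  # lower bound of the spoilage range reported above

day = first_day_at_threshold(sampling_days, yeast_mold_counts, SPOILAGE_THRESHOLD)
print(f"Spoilage threshold first reached on day {day}")
```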
3.3 pH measurements The pH levels of the tomato and strawberry samples increased during the storage period, and the slowest rate of increase was observed in the UV-C & Vacuum samples, as shown in Figure . The increase in pH in the UV-C and vacuum strawberry samples ranged between 0.25 and 0.29 units, as shown in Figure , while the increase in the tomato samples was between 0.20 and 0.24 units by the last days, as indicated in Figure . On the other hand, the fastest rate of pH increase was observed in the control samples, which implies a reduction of the acidity level in the fruits (Mgaya-Kilima et al., ). These results are in agreement with other research that showed an increase in pH levels throughout the storage period of strawberries and tomatoes (Caner et al., ; García et al., ).
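One simple way to express the "rate of pH increase" compared between conditions above is as the least-squares slope of pH against storage day; the sketch below implements that, using invented day and pH values rather than the study's data.

```python
# Minimal sketch: estimate the rate of pH increase (pH units per day) as the
# slope of an ordinary least-squares line. Values are illustrative placeholders.

def pH_increase_rate(days, pH_values):
    """Return the least-squares slope of pH against storage day."""
    n = len(days)
    mean_d = sum(days) / n
    mean_p = sum(pH_values) / n
    num = sum((d - mean_d) * (p - mean_p) for d, p in zip(days, pH_values))
    den = sum((d - mean_d) ** 2 for d in days)
    return num / den

days = [0, 5, 10, 15, 20]
pH_control = [3.40, 3.55, 3.72, 3.90, 4.05]   # hypothetical control sample
pH_uvc_vac = [3.40, 3.45, 3.52, 3.60, 3.66]   # hypothetical UV-C & Vacuum sample

print(f"Control rate:       {pH_increase_rate(days, pH_control):.3f} pH units/day")
print(f"UV-C & Vacuum rate: {pH_increase_rate(days, pH_uvc_vac):.3f} pH units/day")
```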
3.4 Organoleptic examination Spoilage is indicated by a variety of sensory cues, such as off-colors, off-odors, and the softening of vegetables and fruits. For the strawberry samples, spoilage was determined based on the color change to dark red, the appearance of soft brown spots, wilting, seed disappearance (softness), and the appearance of white fungal growth. Spoilage was observed between days 8 and 10 for the control samples, between days 13 and 15 for the vacuum-sealed samples, and between days 13 and 17 for the UV-C-irradiated samples. The samples that were vacuum sealed and irradiated with UV-C light remained intact for a longer period of time and spoiled between days 19 and 20, as illustrated in Figure . On the other hand, the spoilage of the quartered tomato samples was determined based on the appearance of black spots, small green and white mold growth, a mushy texture, fluid leakage, or a foul odor. Generally, the odor of the UV-C & Vacuum samples was the strongest, followed by the Vacuum samples. The color alteration was slight in the UV-C & Vacuum and Vacuum samples, while it was more noticeable in the UV-C samples. These changes were observed in the control samples between days 13 and 15, in the vacuum-sealed samples between days 15 and 17, and in the UV-C-irradiated samples between days 17 and 20. The UV-C and vacuum samples remained intact for a longer time; small black spots and fluid leakage were noticed between days 20 and 23, as indicated in Figure . The shelf-life results determined through the organoleptic analysis align with those determined via the microbial population quantification of yeast and mold and Pseudomonas sp. that are illustrated in Figures and . 3.5 Sensory evaluation The consumer acceptance of perishables is greatly influenced by their sensory qualities. The effects of UV-C irradiation and vacuum sealing on the sensory characteristics of the strawberry and tomato samples, such as the color, texture, taste, aroma, and general acceptance, were evaluated by 12 panelists. Figures and illustrate the average scores reported by the panelists for the strawberry and tomato samples throughout the full experimental period. As shown in the figures, there are notable differences in the sensory traits and the shelf life of the samples based on the storage condition, where the positive effects of the UV-C irradiation and the anaerobic storage are noteworthy. Although the shelf life of the UV-C-irradiated samples is longer than that of the control samples, the sensory evaluation scores of the UV-C samples at the end of the shelf life are lower than those of the control samples. This is attributed to the effect of the aerobic storage environment, which results in higher weight loss percentages (Figure ), dehydration, and an overall quality decay throughout the long storage period. In contrast, the scores of the Vacuum samples are within the excellent limits, despite a slight decrease in the texture and taste scores at the end of the shelf life. It was observed that the UV-C & Vacuum samples received a higher acceptance rate in comparison to the other storage conditions. The sensory evaluation scores (Figures and ) and the overall acceptability (Figure ) of the UV-C & Vacuum samples are greater than those of the UV-C, Vacuum, and Control samples, despite a minor decline in taste and texture scores by the end of the shelf life. To visualize the overall effectiveness of each storage condition in extending the shelf life of strawberries and quartered tomatoes, the average shelf-life values of the three experimental runs are plotted in Figure . As shown in the figure, the maximum shelf life was achieved in the UV-C and vacuum storage condition, where the average shelf life exceeded the normal (control) storage condition by 124.41% for strawberries and by 54.41% for quartered tomatoes. Although the shelf life of the samples varied between the iterations due to potential differences in the growing practices, harvest times, handling, and transportation, the standard deviation between the iterations is fairly low (<1.63), which indicates the possibility of using the average shelf-life values of quartered tomatoes and whole strawberries as a reference. Since the UV-C & Vacuum samples attained high sensory evaluation scores despite the long storage period, this storage method can be applicable in real-world settings and could have a significant impact on the economy and the environment. However, further study is required to examine the unit economics, usage scenarios, and practical designs.
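The shelf-life extension percentages quoted above (e.g., 124.41% for strawberries under UV-C and vacuum) follow from comparing mean shelf lives against the control mean; the sketch below reproduces that arithmetic and the between-run standard deviation on hypothetical run values, not the study's raw data.

```python
# Minimal sketch: mean shelf life, standard deviation across runs, and the
# percentage extension over the control condition. Values are illustrative.
from statistics import mean, stdev

def extension_over_control(treatment_days, control_days):
    """Percentage by which the mean treatment shelf life exceeds the control mean."""
    return 100.0 * (mean(treatment_days) - mean(control_days)) / mean(control_days)

# Hypothetical shelf lives (days) from three experimental runs.
shelf_life = {
    "Control":       [9, 8, 10],
    "UV-C":          [15, 14, 16],
    "Vacuum":        [14, 13, 15],
    "UV-C & Vacuum": [20, 19, 20],
}

for name, days in shelf_life.items():
    print(f"{name:15s} mean = {mean(days):.1f} d, SD = {stdev(days):.2f} d")

ext = extension_over_control(shelf_life["UV-C & Vacuum"], shelf_life["Control"])
print(f"UV-C & Vacuum extension over control: {ext:.2f}%")
```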
CONCLUSION The effectiveness of the combination of UV-C irradiation and vacuum sealing in extending the shelf life of whole strawberries and quartered tomatoes stored at 4°C was examined and compared with the shelf life under normal storage, UV-C irradiation alone, and vacuum sealing alone. The shelf life and quality of the samples were evaluated through organoleptic quality examination, weight loss measurement, pH analysis, and the microbial population quantification of yeast and mold and Pseudomonas sp. The combination of UV-C irradiation and vacuum sealing increased the average shelf life of the strawberries and tomatoes by 124.41% and 54.41%, respectively. The results suggest that this storage condition is more effective for fresh produce preservation than the sole use of UV-C irradiation or vacuum sealing, and it could significantly reduce the spoilage rate of fresh produce.
Asrar Damdam : Conceptualization; Investigation; Funding acquisition; Writing – original draft; Formal analysis; Writing – review & editing; Methodology; Visualization; Data curation. Ashwaq Al‐Zahrani : Methodology; Validation; Writing – original draft; Writing – review & editing; Investigation. Lama Salah : Data curation; Validation; Visualization; Writing – review & editing; Methodology. Khaled Nabil Salama : Funding acquisition; Writing – review & editing; Supervision; Resources.
The authors declare no conflict of interest.
Integrating static and modifiable risk factors in violence risk assessment for forensic psychiatric patients: a feasibility study of FoVOx
Risk assessment is an integral part of forensic psychiatric practice. The process of gatekeeping new patients into a secure hospital setting, or readmitting them, is generally dependent on an assessment of the seriousness of their risks to others. Many interventions in secure psychiatric hospitals focus on decreasing the likelihood of causing future harm, and a reduction in risk is central to discharge planning, often as a criterion under mental health law or related legislation. Despite the importance of accurate risk assessment, there are challenges in how it is currently conducted. In clinical settings, structured professional judgement tools are generally preferred to actuarial assessments of risk. Although there are benefits, in that an individual risk formulation can be constructed, drawbacks have been noted by experts. These include poor field validity, the inclusion of items that are not predictive, leading to redundancy and waste, and their implementation and use in populations different to those in which they were developed. Further, they are often time consuming to complete and do not provide an easily interpretable, quantified assessment of risk. Therefore, there has been increasing discussion of the use of evidence-based actuarial tools developed specifically for forensic psychiatric populations that are scalable, transparently developed, and validated. These can improve the accuracy of risk assessment without adding significantly to the burden on staff. Doing so may also increase the time available for risk management and violence prevention, rather than purely focusing on assessment. One such tool, the Forensic Psychiatry and Violence tool Oxford (FoVOx), has demonstrated good performance in terms of discrimination (a tool's ability to distinguish between those who have the outcome of interest and those who do not, by assigning a higher risk score or category to those with the outcome), with a reported AUC of 0.77, a sensitivity (true positive rate) of 55%, and a specificity (true negative rate) of 83% using a 20% probability score as the cut-off for elevated risk of violent reoffending. The FoVOx tool has also demonstrated good calibration (how well the tool's predicted risk matches the actual observed risk), which has not been reported for previous risk assessment instruments but is a key performance metric. The FoVOx tool was developed using multivariate models and, unlike other tools, was based on an adequate sample size for tool development. FoVOx has, furthermore, been internally validated, and the coefficients and formula for its output have been published. In this study, we aimed to investigate the FoVOx tool in a Swedish forensic psychiatric setting, with a focus on feasibility and pilot validation data. This is the first such study in a Nordic country. Secondary aims were to examine how the tool could be implemented and developed, including for monitoring risk.
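For readers less familiar with the metrics cited above, the following sketch shows how sensitivity and specificity are obtained once predicted probabilities are dichotomised at a cut-off (20% here, as in the reported FoVOx validation); the probabilities and outcomes are invented for illustration and do not reproduce the published figures.

```python
# Minimal sketch: sensitivity and specificity at a fixed probability cut-off.
# The predicted probabilities and outcomes below are invented for illustration.

def sensitivity_specificity(probabilities, outcomes, cutoff=0.20):
    """Dichotomise predicted probabilities at `cutoff` and return (sensitivity, specificity)."""
    tp = fp = tn = fn = 0
    for p, y in zip(probabilities, outcomes):
        predicted_positive = p >= cutoff
        if predicted_positive and y == 1:
            tp += 1
        elif predicted_positive and y == 0:
            fp += 1
        elif not predicted_positive and y == 0:
            tn += 1
        else:
            fn += 1
    return tp / (tp + fn), tn / (tn + fp)

probs    = [0.02, 0.08, 0.25, 0.31, 0.05, 0.18, 0.40, 0.07, 0.12, 0.22]
observed = [0,    0,    1,    1,    0,    0,    1,    0,    1,    0   ]

sens, spec = sensitivity_specificity(probs, observed)
print(f"Sensitivity = {sens:.2f}, Specificity = {spec:.2f}")
```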
Study design and participants We used a mixed-method approach to investigate the feasibility of FoVOx and examined data on its predictive performance by: (i) identifying discharged forensic psychiatric patients in the Swedish National Forensic Psychiatric Register (RättspsyK); (ii) scoring their risk using the FoVOx tool; (iii) qualitatively assessing the tool by interviews with the clinicians in charge at discharge; and (iv) conducting a pilot investigation of the recidivism rates in the patient cohort based on their FoVOx score. Setting The Swedish National Forensic Psychiatric Register (RättspsyK) is a national quality register which has collected a range of socio-demographic, criminal history and clinical data on patients sentenced to forensic psychiatric care since 2008. Twenty-four (out of 25) Swedish forensic psychiatric units annually report to this register, with a current national patient coverage of around 85%. In addition to basic patient information (age, sex, geography, date of admission and discharge), the register contains and annually collects data on 25 indicators, including ICD-based psychiatric and somatic diagnoses, types of treatment, level of care and accommodation. The Swedish National Crime Register provides data on all crime convictions in Sweden in individuals aged 15 and over (the age of criminal responsibility) since 1973. Patients All patients registered in RättspsyK and discharged from forensic psychiatric care in Stockholm County to the Swedish community between 1 January 2012 and 31 December 2017 were identified and included in the study cohort. Clinicians All lead clinicians (consultant level or equivalent) for the patient cohort at the time of discharge were identified and contacted for interview. This comprised seven women and seven men, all specialist psychiatrists but not all sub-specialized in forensic psychiatry. Measures FoVOx Information to calculate each included patient's FoVOx score was extracted from RättspsyK. FoVOx is an online violence risk assessment tool that consists of twelve items, including socio-demographic, criminal history, and clinical factors, which are mostly categorized dichotomously. When there was missing data, such as status of employment prior to conviction, as in previous work, the necessary information was reliably completed from available health records. The Swedish-translated online version of FoVOx (available at https://oxrisk.com/fovox-7/ ) was used to calculate risk scores (a probability of violent offending at 1 and 2 years after discharge that ranges from 0 to 60%, with the highest score set at a ceiling of >60%) and to present FoVOx to clinicians during interviews. Questionnaire A Swedish version of a previously developed semi-structured feasibility questionnaire was used to interview clinicians ( Supplementary Appendix 1–2 ). Each clinician went through an in-depth interview with a combination of predetermined options and open-ended questions regarding each of their assessments prior to discharge. So that clinicians could familiarise themselves with their patient prior to the interview, they were asked to read an extract of their own previous psychiatric report for the court (which is completed every 6 months in Sweden). The standardized questionnaire contained no patient identifiable information. As part of this interview, the clinician was asked to estimate at discharge the two-year risk of a violent conviction in terms of the pre-specified FoVOx categories (Low <5%; Medium 5–20%; High >20%).
In instances of a given overlapping risk range (e.g. low-medium, or medium-high), the highest risk was recorded. The clinician was then asked if they knew whether the patient had committed a violent offence since discharge. After this, the clinician was informed of the calculated FoVOx risk assessment score and risk category of their patient at discharge. The clinician's view and reasoning, as well as their thoughts on the potential use of FoVOx at the previous discharge, were then recorded. In each instance, the clinician was asked to provide reasons why FoVOx would or would not have altered the previous clinical management. Lastly, a verbal summary of the collected information was given at the end of each interview for the clinician to confirm or specify further. The records of the open-ended questions were individually analysed and thematically organized by two interviewers, who are both specialist psychiatrists (JF, HB). In a follow-up consensus meeting, principal themes were identified and agreed in accordance with previous work and newly found categories. Pilot validation of FoVOx Each included patient was identified in the National Crime Registry with respect to sentenced violent crime convictions in Sweden, in accordance with previous definitions. The specified dates on which the crimes were committed were used to calculate the time periods from discharge to violent re-offence. A cut-off of 730 days was used to validate the performance of the FoVOx two-year risk prediction post discharge. Ethics The research ethics committee in Stockholm, Sweden, approved the research project (reference number 2019-04048). To identify patients, existing data on discharges in the RättspsyK was used. No patient data beyond what had been collected through routine clinical care or previous informed consent as part of inclusion in the RättspsyK was used. Management of patients or registry data was not impacted by the study. All interviewed clinicians participated in the study voluntarily under informed consent, and patient data was anonymized other than for the 'unblinding' during the interviews.
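Two derived quantities in this design, the pre-specified FoVOx risk category and the two-year (730-day) reoffending window, lend themselves to a short illustration. The sketch below codes both; it takes the FoVOx probability score as given (the published coefficients are not reproduced here), and the score and dates are hypothetical.

```python
# Minimal sketch: map a FoVOx two-year probability score to the pre-specified
# risk category, and flag whether a new violent offence fell within 730 days
# of discharge. The score and dates below are hypothetical.
from datetime import date

def fovox_category(two_year_probability):
    """Low < 5%, Medium 5-20%, High > 20% (pre-specified FoVOx categories)."""
    if two_year_probability < 0.05:
        return "Low"
    if two_year_probability <= 0.20:
        return "Medium"
    return "High"

def reoffended_within_two_years(discharge, offence):
    """True if the offence date falls within 730 days of discharge."""
    return offence is not None and (offence - discharge).days <= 730

score = 0.07  # hypothetical FoVOx probability of violent reoffending within 2 years
print(fovox_category(score))  # -> Medium

print(reoffended_within_two_years(date(2014, 3, 1), date(2015, 11, 20)))  # -> True
print(reoffended_within_two_years(date(2014, 3, 1), None))                # -> False
```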
Sample A total of 197 discharges from forensic psychiatric care in Stockholm County were identified from 1 January 2012 to 31 December 2017. Ninety-five patients were not included in the follow-up (15 were registered as having died from any cause and 80 patients had been transferred to another country, secure unit, or other forensic psychiatric hospital). An additional seven patients were excluded due to loss to follow-up, as two of the 14 consultant psychiatrists in charge of their care did not participate in the interviews. Therefore, 95 patients discharged to the community in Sweden were included in the study ( ). Of these, 15 (16%) were female and the median age was 46 (range 21–82). The number of assessed patients per clinician ranged from 1 to 25, and the median time from discharge to study interview was 2141 days (interquartile range 1788–2602 days). Eight out of 12 (67%) clinicians reported the use of a structured risk assessment tool in addition to clinical interviews at the time of discharge. These were the Short-Term Assessment of Risk and Treatability (START) ( n = 2), the HCR-20 ( n = 3), or a combination of both ( n = 2). One clinician reported the use of the Violence Risk Appraisal Guide (VRAG). Baseline characteristics Sample characteristics and the distribution of FoVOx-specific risk factors are presented in . Of the sample, 89 (94%) had previously been sentenced for a violent crime, 88 (93%) had over one year of current inpatient stay, and 91 (91%) had been unemployed for at least six months at the time of their detention. 44 (46%) of the cohort had previously committed a serious violent crime and 39 (41%) had a history of drug abuse. The most common primary diagnosis at discharge was schizophrenia spectrum disorder ( n = 46, 48%). In those who had new violent convictions after discharge, the median age at discharge was 36. In comparison to the full study sample, those who had violently reoffended were less likely to be male or to have schizophrenia spectrum disorder and multiple previous inpatient episodes. All other risk factors were more common among those committing violent crimes after discharge. FoVOx scores FoVOx scores were calculated prior to the interviews for each patient from RättspsyK data and clinical records. The median FoVOx probability score for violent reoffending within two years was 7% (range 0% to 40%) for the overall sample. Regarding the FoVOx pre-specified risk categories, 28 (30%) were estimated to be low risk, 60 (63%) medium risk, and 7 (7%) high risk. Recidivism Of the 95 discharges, 9 (9%) were reported to have committed further violent offences based on the information from the clinician in charge. Of these, five patients had FoVOx scores in the medium category and one in the high category. Two of these, and four other patients ( n = 6, 6%), were identified in the crime register as having been convicted of new violent crimes within two years after discharge. Among the convicted violent recidivists, five were categorized as medium or high risk. Concordance between FoVOx scores and clinical judgment Dichotomizing the risk assessment ( low versus medium/high ), the agreement between clinician and FoVOx scores was 47% (42 out of 90; kappa = 0.09 [95% CI, −0.05 to 0.24]). The clinician's versus FoVOx risk ratings are presented in . In most cases ( n = 60, 63%), the clinician in charge considered the FoVOx risk assessment to be an accurate representation of the actual risk of violence at discharge.
In 24 (25%) instances, clinicians did not think FoVOx accurately reflected this risk. The reasons identified for why FoVOx was not considered an accurate representation of the risk were mostly based on the relative proportion of modifiable (dynamic) and static factors in the tool, and on whether FoVOx was considered to overestimate or underestimate risk ( ). Modifiable factors thought to be missing, in either direction, were ' level of insight ' and ' recurrent and compulsive thoughts '. Some protective factors that clinicians felt a high FoVOx score did not take into account were: ' an uncomplicated patient '; ' stability and progress of given care '; and ' well-coordinated social support measures '. Other relevant factors were: ' relapse of substance abuse '; ' impulsivity '; ' oddness of index crime '; and ' adherence to medication '. Among static (non-modifiable) factors considered missing when FoVOx was thought to overestimate risk were ' severe somatic illness ', ' misjudged primary diagnosis at discharge ' and ' honor-related violence '. Low-risk FoVOx assessments were in a few instances thought to miss possible static risk factors such as ' dementia ', ' psychopathy/manipulative behavior ', and ' autism '. ' Level of accommodation ' was repeatedly mentioned both as a static risk factor and as a protective factor against future violence. Viewpoints on utility at the point of discharge All the interviewed clinicians expressed that FoVOx would have been of clinical benefit at the time of discharge. Additionally, in 20 (20%) discharges, clinicians thought that the instrument would have materially altered their assessment and management. The qualitative feedback is summarized in . In instances when FoVOx was considered helpful, clinicians stated that it: ' corresponded and supported our clinical judgment '; ' would have added an additional objective argument '; ' the results would have been easy to communicate with the court, community, and patient '; and ' would have highlighted the overall risk in a more specific way than just the overall clinical judgment '. Comments that considered FoVOx not helpful were: ' assessment would have been based on other factors, including modifiable factors ' and ' a general clinical impression is of greater value than specific risk points '. Overall views of practicality and future use In terms of practical use, all clinicians found the FoVOx web-based tool to be practical, and the majority ( n = 8, 67%) reported that the tool could be completed without referring to clinical notes. Nine clinicians (75%) planned to use FoVOx in the future, whereas two clinicians were unable to say, and one would not use the tool, citing current work with non-forensic psychiatric patients. Reasons against the use of FoVOx were: ' not suitable for every patient ' and ' it might give a false risk assessment when specific variables are not covered '. Common reasons for future use included that FoVOx is: ' possible to use both in regard to termination and continued care '; ' it's made simple to compare risk factors '; ' it will be very useful for junior colleagues and other specialties '; and ' it is very relevant, easy to use, well-structured and time-efficient '.
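The 47% raw agreement and kappa of 0.09 reported above come from cross-tabulating the dichotomised clinician and FoVOx ratings. The sketch below shows how Cohen's kappa is computed for a 2 × 2 agreement table; the cell counts are invented and chosen only to roughly reproduce the reported agreement and kappa, and are not the study's actual cross-tabulation.

```python
# Minimal sketch: Cohen's kappa for agreement between two dichotomised raters
# (e.g. clinician vs. FoVOx rating, "low" vs. "medium/high"). The cell counts
# below are invented, chosen only to roughly match the reported figures.

def cohens_kappa(a, b, c, d):
    """
    Kappa for a 2x2 agreement table:
        a = both raters 'low'                    b = rater 1 'low', rater 2 'elevated'
        c = rater 1 'elevated', rater 2 'low'    d = both raters 'elevated'
    Returns (raw agreement, kappa).
    """
    n = a + b + c + d
    observed = (a + d) / n
    p1_low, p2_low = (a + b) / n, (a + c) / n
    expected = p1_low * p2_low + (1 - p1_low) * (1 - p2_low)
    return observed, (observed - expected) / (1 - expected)

agreement, kappa = cohens_kappa(a=22, b=43, c=5, d=20)  # n = 90 paired ratings
print(f"raw agreement = {agreement:.2f}, kappa = {kappa:.2f}")
```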
From the Swedish National Forensic Psychiatric Register (RättspsyK), we identified and completed individual FoVOx risk assessments on 95 discharged forensic psychiatric patients in Stockholm County. We then interviewed 12 specialist psychiatrists who were the lead clinicians at the time of patient discharge. These interviews assessed the clinicians' previous risk assessments, the usability and usefulness of FoVOx, its perceived accuracy, and potential improvements. Lastly, we investigated the sample's probability scores of violent offending after discharge from hospital based on the tool and compared these with officially recorded convictions for violent crimes as part of a pilot external validation. In keeping with previous studies, clinicians found the FoVOx tool easy and practical to use, as well as reliable. Despite mixed concordance between FoVOx probability scores and the clinical judgments at the time of discharge, most clinicians nevertheless considered that FoVOx presented an accurate representation of the risk of violent reoffending. The calculated median risk (7%) of violent reoffending within two years post discharge was consistent with officially recorded convictions for violent crimes (6%) over two years, but lower than the median risk (11%) of the target population (all discharged forensic psychiatric patients in Sweden during 1992 to 2013) from which FoVOx was developed. The extent to which the tool captured the unexplained variance in violent reoffending was not directly tested, but the Brier score, a measure of calibration or the extent of the correspondence between expected and observed outcome rates, provides one approach and was tested in the FoVOx development sample. The Brier score can range between 0 and 1 and quantifies the accuracy of a tool's risk prediction by averaging the squared differences between the predicted and observed outcome probabilities. Based on the internal validation, the tool performed very well for the two main outcomes at 24 months (Brier score 0.09) and 12 months (0.06), where 0 would be a perfect score and 1 would be poor. In the qualitative analysis, consistent with previous feasibility studies, some clinician impressions were that FoVOx lacked modifiable and some specific static risk factors. Clinicians suggested that missing static factors included oddness of the index offence, statutory supervision at the point of discharge, discharge to supported accommodation, other specific chronic diagnoses, and chronicity of past violence. Further work could investigate whether adding these additional factors could incrementally improve FoVOx accuracy. In relation to 'oddness of the index crime', although such offences have been studied and incorporated in criminal personality profiling since the 1970s and associated with some cases of autism spectrum disorder and psychosis, this has not to date been integrated as a static item in any violence risk instrument. Clinician respondents were generally more focused on adding modifiable factors, which is understandable given the need to provide interventions to reduce the risk of violent recidivism. Based on this and previous FoVOx feasibility studies, possible modifiable risk factors could include: current substance abuse, adherence and response to medication, impulsivity, recency of violence post-sentence (any recorded interpersonal violence on the inpatient ward, at home or in the community after the index sentence date), insight, and psychosocial support and employment after discharge.
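The Brier score described above has a very simple form: the mean squared difference between each predicted probability and the observed binary outcome. The sketch below computes it for a handful of invented predictions to make the 0-to-1 scale concrete; the values are not taken from the FoVOx development sample.

```python
# Minimal sketch: Brier score as the mean squared difference between predicted
# probabilities and observed binary outcomes (0 = no reoffence, 1 = reoffence).
# The predictions and outcomes are invented for illustration.

def brier_score(probabilities, outcomes):
    """Mean of (p - y)^2 over all cases; 0 is perfect, values near 1 are poor."""
    return sum((p - y) ** 2 for p, y in zip(probabilities, outcomes)) / len(outcomes)

predicted = [0.05, 0.12, 0.30, 0.07, 0.22, 0.04, 0.15, 0.40]
observed  = [0,    0,    1,    0,    0,    0,    0,    1   ]

print(f"Brier score = {brier_score(predicted, observed):.3f}")
```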
This is also consistent with qualitative work about risk assessment more generally in forensic settings. However, one risk factor that has not been identified in qualitative work but reported in the current study is ‘recency of violence post-sentence’. Some of these clinical factors are contained in other risk assessment tools, such as FoxWeb, which is based on 10 modifiable factors and has recently been validated. As with FoVOx, FoxWeb is quick to complete, includes predictors that are reliably coded, and requires little training. Since it focuses only on modifiable predictors, the use of FoVOx and FoxWeb together would address clinician concerns about actuarial tools and enable risk monitoring over time. Future work could assess the feasibility of using two separate tools (or combining them), including testing whether the inclusion of new factors would incrementally improve the performance of FoVOx, and its acceptability among clinicians. Previous work has noted that adding certain clinical factors, such as poor adherence, and psychosocial factors, such as community supervision, may increase the tool’s acceptability to clinicians. Apart from FoxWeb, the Structured Outcome Assessment and Community Risk Monitoring (SORM), developed in Sweden, was an attempt to continuously measure around 30 modifiable factors among forensic psychiatric patients. However, it has not maintained clinical use, possibly due to its complexity and lack of ongoing advocacy. Other work that has used Bayesian networks for risk assessment has yet to be tested and externally validated among forensic patients, and may also be too complicated for translation into practice. A central aim of working with forensic psychiatric patients is to reduce the risk of recidivism. One of the benefits of tools such as FoVOx is that their brevity and ease of use free up more time for risk management. In the future, trials could examine whether the implementation of scalable risk assessment tools improves outcomes, and whether incorporating the strategies identified above, such as improving adherence with treatment and facilitating meaningful daytime activity, prevents reoffending on discharge from secure hospital. Further work could investigate the role of more regular follow-up by clinical services or multidisciplinary review, enhancing medication adherence by optimising antipsychotic treatment and considering intramuscular administration, and offering psychological therapies to address substance misuse and other comorbidities. This may involve closer liaison between forensic and general adult community mental health services, along with substance misuse treatment providers, to provide timely intervention.
One limitation is that any comparison of clinical judgement with risk assessment tools using thresholds depends on what clinicians understand the categories low, medium, and high to mean. Any such comparison also needs to consider that a statutory requirement for termination of forensic psychiatric care under special court supervision in Sweden is that there should not be any remaining risk of repeat offending of a serious nature, including violence against the person. In practice, this means that all discharged persons will be deemed low risk by clinical teams, and the FoVOx threshold of <5% may not reflect what clinicians mean by low risk. In contrast, most of the sample had medium risk scores (5–20% probability of repeat violent offending within 2 years), and if a threshold of <20% were used, then the concordance between FoVOx and clinical rating would have been nearly perfect. This discrepancy may also explain the variation between the risk of recidivism in our sample (median 7%) and the original sample from which the FoVOx tool was developed (11%), as only the lower risk cohort can actually be discharged (although caution is warranted in this interpretation as the numbers were small). Those posing a higher risk of recidivism will have remained in hospital. This variance may also be accounted for by subtle changes in practice over time and a move towards risk aversion. The number of repeat offenders in this pilot was small, and not sufficient for an external validation. Further research is warranted, including a larger updated validation of the Swedish forensic psychiatric rates of violent recidivism. In addition, this study only examined violent reoffending, but multiple adverse outcomes should be considered on discharge. In particular, high rates of mortality have been reported in forensic patients. In conclusion, in this first feasibility study of FoVOx in a Nordic country, we found, using mixed methods, that the tool was acceptable, easy to use, had a positive impact on decision-making, and could be used as a complement to current clinically-led approaches. The incremental utility of adding more modifiable factors is an area for future research.
|
The role of natural experiments in hepatology research: filling the gap between clinical trials and service evaluations
|
a178961c-6434-4aa6-9288-c7ecb5ab6971
|
10109452
|
Internal Medicine[mh]
|
The European Association for the Study of the Liver–Lancet Commission stresses the inconsistency in models of care for liver disease in Europe and the scarcity of programs delivering testing and treatment for early-stage disease. The commission highlights the enormous number of lives that could be saved if measures that address disease prevention and detection are properly validated and implemented. Both the European Association for the Study of the Liver–Lancet Commission and field leaders in the US emphasize the need to study the “social determinants of liver disease” (eg, stigma, discrimination, and asymmetric resource allocation) if meaningful progress is to be made. Presently, the quantity and quality of interventional studies addressing upstream social determinants of health in gastroenterology and hepatology are described as “grim.” There are many barriers to conducting research in this area: (1) the causal relationship between social determinants of health and liver disease is convoluted and complex, (2) in the short term, intervention leads to “soft” nonclinical outcomes (eg, reduced alcohol intake), (3) interventions are often multimorbidity-focused, and (4) potential research participants are predominantly in the community rather than hospital settings—limiting the accessibility of the research population to predominantly hospital-based hepatologists. An important additional contributory factor to this lack of evidence is our collective professional insistence on using clinical research methods to solve what are essentially public health problems. This leads to a lack of diversity in research and a particular lack of evidence for interventions targeting social determinants of liver health in marginalized and deprived populations—a lack of evidence that leads to a lack of spending and policy change. The gold standard clinical experiment is the randomized controlled trial (RCT). An RCT has 4 defining features: (1) it includes 2 or more groups, (2) 1 or more groups are assigned to a treatment or series of treatments, (3) subjects are randomly assigned to 1 group, and (4) the treatment can be manipulated by the researcher. The random assignment of the individuals to groups means that “on average,” they should have the same characteristics. Thus, statistically similar groups are exposed at the same time to 2 or more different conditions, which reduces or eliminates confounding and supports causal inferences. There are, however, many circumstances when an RCT is impossible and many cases when, even though an RCT is possible, such a trial has not been funded, has not been done, and will not be done in a timescale that helps the policy maker or clinician. The challenges in using RCTs to evaluate complex interventions to overcome social determinants of health are well described, and most strategic decisions—particularly in Public Health—are made without the benefit of evidence from an RCT. So, what else constitutes acceptable evidence? Figure (adapted from Ogilvie et al) describes 2 pathways that lead to health policy change. The first (pathway A) includes RCTs and is more typical of the hospital-based system that is familiar to clinical hepatologists. Expert opinion and observational data are collected, collated, and presented. This leads to the development of an intervention, which is tested in an RCT and leads (usually with support from further trials, meta-analysis, and cost-effectiveness evaluation) to policy action.
A recent example from clinical hepatology is the changing indications for carvedilol in patients with liver cirrhosis. Observational data indicated that beta-blockers should be effective at preventing decompensation in patients with clinically significant portal hypertension. These studies led to an RCT that showed positive results, and this has started to alter international policy. The second pathway (pathway B) is more typical of public health and will be less familiar to clinical hepatologists. Expert opinion and observational data lead to policy change, policy action, and the implementation of an intervention. A good example of a widespread practice in clinical hepatology that lacks evidence from RCTs (with the exception of a study in China) is HCC surveillance with liver ultrasound. Observational data about the relative incidence of HCC in patients with liver cirrhosis and expert opinion have led to the practice being recommended in international guidelines. The impact of HCC surveillance has been evaluated in observational cohort studies that have compared outcomes for patients with HCC “exposed” to surveillance or presenting outside of surveillance. These studies are at risk of lead time bias and selection biases (including length-time bias) for which they have been partially adjusted. The results have been used to parameterize cost-effectiveness models and support the widespread implementation of surveillance. Despite the widespread implementation, some authors have advocated that there is still a need for an RCT, but others have highlighted the lack of acceptability, the large sample sizes needed to demonstrate significant effects, and the high study costs. In 2012, the Centers for Disease Control and Prevention (CDC) in the US recommended cohort screening for HCV of the baby-boomer generation. This was a massive program that received high-level criticism calling for an RCT. However, the call was met with a response from the clinical community that indicated such a trial was unacceptable. Through online responses, other experts cited the high costs involved, the timescale required, and the fact that modeling had already explored some of the uncertainties that a trial would address. In a similar example, NHS England has recently funded a widespread scale-up of community testing for early-stage liver disease. The program follows the recent publication of the NHS Long Term Plan and a political focus on early identification of disease—specifically cancer. In keeping with pathway B in Figure , the policy has led to rapid implementation without utilizing the evidence-generation steps in pathway A. What can help clinicians decide whether interventions implemented into practice without passing through the traditional hierarchy of medical evidence are the right thing for their patients and the communities they look after? As we have highlighted, observational data can help but are subject to biases that limit causal inferences. In the remainder of this article, we will discuss how natural experimental studies (henceforth abbreviated to NES)—sitting somewhere between experimental and observational research methods—can help. We describe this method in detail for the clinical audience of this journal because we believe NES are key to better evaluations of large-scale health interventions for patients at risk of, or with, liver disease outside of the hospital walls. Unlike other research methods, they are undertaught and underutilized.
What are NES?
To illustrate what we mean by NES, we will work through historical, famous, widely cited, but infrequently fully explained examples of Public Health research. It is well known that in 1854 John Snow identified the source of cholera outbreaks in London, UK, and undertook a simple Public Health intervention—he is famously credited with removing the handle from the Broad Street water pump—thereby cutting off a key source of contaminated water. However, the study design John Snow used to draw his conclusions is less well known. Sometime before his study, 1 of the 2 water companies serving London situated their intake pipe in the River Thames upstream of the city in (what turned out to be) less contaminated water. The other company continued to take water from the Thames as it ran through the city. To test his hypothesis that cholera was waterborne, John Snow looked at cholera cases in households served by each water company. He noted that the incidence of cholera in households served by the downstream water company was 10 times that of households served by the company with the upstream source. John Snow recognized the risk of bias and worked hard to prove that the supply of water to each household was not associated with other factors that could be associated with cholera (ie, confounders). In fact, he was able to show that the supply of water was almost random: many households were unaware of which water company they used, and neighboring houses were often served by different companies. In his study, John Snow highlighted the “rules” that now define NES: the “intervention” (in this case a change in water pipe location) should be outside of the researchers’ control; the allocation of the intervention should be “as if” random, or at the very least variation in exposure should be unrelated to factors that may influence the outcome; and the experiment should be relevant to current health policy/service decisions. Crucially, it should be possible for causal inferences to be drawn from the study. We will return to these rules again when we evaluate examples of NES in hepatology research.
Ground rules that define a natural experiment:
1. Researchers lack control over the implementation of the intervention.
2. Variation in exposure to the intervention should be unrelated to the outcome such that causal inference can be drawn.
3. The intervention should be relevant to public health/health service decisions.
Some authors have contended that this relatively straightforward definition of NES, summarized by the Medical Research Council (MRC) and others, does not capture their full complexity. Dawson et al classify NES into type 1 and type 2 (Figure ). Type 1 fits most closely with the MRC definition and the examples we have already discussed—researchers have no control over the implementation and exposure to the intervention. In type 2, researchers may have some control. For example, they could influence how and where a health intervention is being deployed to influence the seminatural formation of groups. Type 2 NES get close in structure to quasi-experimental designs, which are, in turn, closer to the RCT design (Figure ). The term “quasi-experiment” is often used interchangeably with natural experiment, and there remains debate in the literature over their exact definitions. Generally, quasi-experiments are recognized to include designs where the researcher has full control of the intervention but there is still an absence of control over randomization, and hence they would not meet the rules of the definition of NES.
A good example of a quasi-experimental design is when uptake of a researcher-led intervention relies on volunteers (who form the intervention “arm”), with people who do not volunteer becoming the control group. In this example, very careful consideration needs to be given to controlling for potential confounders that are associated with both the act of volunteering and the outcome of interest. NES have strengths over other study designs: they can evaluate the effect of events or interventions that are impossible to manipulate experimentally, interventions are generally less distorted than in strict experimental conditions, and control groups are less likely to alter their normal behaviors. In addition, NES can be used with retrospective data and are less susceptible to confounding than conventional observational designs. Accordingly, NES can provide strong causal information with large effect sizes that are comparable in some circumstances to randomized designs (Figure ). However, to do this, NES need to be carefully planned, well conducted, and accurately reported.
Examples of NES in hepatology
NES have been widely used in global health care–related research, with a broad range of examples including interventions aimed at reducing gun fatalities in the US, improving road safety, improving maternal health, reducing suicide with pesticides, and reducing cycling accidents. We will now consider a few examples of where NES have been used in studies relating to liver disease or the direct risks of liver disease (Table ). In keeping with the recommendations in the recent European Association for the Study of the Liver–Lancet Commission and its previous editions, these studies have an appropriate focus on early identification or prevention of liver disease in community settings. Concerns about overburdening stretched hepatology services have led to novel pathway designs that stratify patients as “high risk” for significant liver disease before a referral is made (for an overview of novel pathways, see Abeysekera et al ). A good example is Srivastava et al, published in 2019. This article has had an impact, with over 200 citations in 3 years. In the study, the authors compared the proportion of significant liver disease in patients referred to the hospital through a novel pathway with others that were referred without the novel pathway and showed that the pathway significantly reduced unnecessary referrals. The study broadly meets the “rules” for a NES (Table ). The study met an important clinical/public health concern: the researchers lacked control over the implementation of the intervention, circumstance dictated which population was exposed, and there was a reasonable argument that the exposed and unexposed groups were broadly similar. Our second and third examples describe interventions to enhance HCV treatment engagement in people who inject drugs (PWIDs). In both, the populations who are exposed to the intervention live in areas where there has been early implementation of enhanced services for HCV treatment, and the “control” or unexposed populations live in areas with slow adoption of the interventions. Hickman and colleagues describe the study protocol for the Epitope study (results unpublished at the time of writing). They compare the prevalence of HCV in the Tayside area of Scotland to other parts of Scotland where HCV services for PWID were in their relative infancy.
Jugnarain et al describe the impact of peer-supported engagement with HCV treatment in PWID living in areas of England where peer support has been implemented and compare the numbers starting and completing treatment with areas that have not started a peer-supported program. They observed a significant increase in the rates of treatment initiation and contended that this was unlikely to be due to hidden confounders: “given the magnitude of the change and the large number of networks involved it is difficult to envisage a common confounding factor that could have led to the changes we observed.” Our final example tested the impact of the implementation of the minimum unit alcohol pricing policy in Scotland. In many respects, this is a “classic” NES. Observational data describing the association between cost and consumption led directly to a policy change. Evaluation of the impact then relied on observational data and NES. O’Donnell et al compared the amount spent per household on alcohol in Scotland and England (where the policy was not implemented), and separately in northern areas of England, to control for “cross-border contamination.” The authors showed an immediate drop in alcohol purchasing in Scotland and no comparable decrease in England. The authors summarized the rationale and strength of their natural experiment: “although the randomised controlled trial remains the ideal research standard, interrupted time series analysis provides a strong alternative where an experimental study design is infeasible or unethical, such as the evaluation of policy initiatives in healthcare.”
Design and analysis in NES
By definition, in NES, the researcher has little or no influence over exposure to the intervention. In all NES, exposure to the intervention is therefore at risk of selection bias, as the implementation is very rarely completely random—an exception may be a study that compares lottery winners to members of the general population. Selection bias becomes a problem when it leads to confounding. A confounder is a covariate associated with both the intervention and the outcome of interest. Figure A illustrates this as a directed acyclic graph, and as an example, Figure B illustrates how observed and unobserved differences (covariates) between patients with cirrhosis exposed and unexposed to HCC surveillance could lead to confounding in observational studies evaluating its effectiveness. The study design and analytical approach taken should be the best available to mitigate the effect of selection bias and confounding on the outcome. There are many approaches to maximize causal inference in NES, which in many instances apply equally to observational and randomized designs. Broadly speaking, these approaches fall into 2 groups—those designed to deal with recorded covariates and those designed to deal with things the researcher does not know about the study population (see Figure B for an example). We summarize the approaches in Table and highlight how our examples of NES in hepatology research have maximized causal inferences in the following text. A more comprehensive overview of different approaches to maximize causal inference is available elsewhere. Srivastava and colleagues compared patients referred through a novel service pathway to patients referred from other areas in London (UK), where the pathway had not been implemented. The results are presented as the odds that patients seen in the clinic will have significant fibrosis/cirrhosis—that is, are they appropriate referrals?
The results were positive, with patients referred from General Practice (GP) via the novel pathway being more likely to have significant disease; however, it is unlikely that the patients coming from the 2 areas are exactly the same, that is, there will be some selection bias in exposure to the novel pathway. Had this same study been an RCT, the unit of randomization would have been GP practices. A confounder would therefore arise from a variable associated with GP services in one area that is also associated with the outcome of interest (Figure ). For example, an education program aimed at GPs in the intervention area could have improved the appropriateness of referral independently of the new pathway. To support their assertion that the new pathway (rather than hidden confounders) caused the improved selection of patients referred to secondary care, the authors conducted a supplementary analysis. Further analysis showed a significantly increased proportion of appropriate referrals within the intervention area if the novel pathway was followed compared with those where it was not. However, in their analysis, Srivastava and colleagues do not account for background trends in the primary outcome. When outcomes are analyzed discretely, underlying trends are unaccounted for, which can lead to misleading results. For example, the development of the intervention with community partners could have led to a change of behavior in referring primary care physicians before the novel pathway was introduced. The observed effect could have been a continuation of this behavior change after the pathway was introduced rather than an effect of the pathway itself. Figure A and B show 2 hypothetical time series of a percentage (y-axis) over time (x-axis). In Figure A, the mean monthly percentage is 51% before the intervention versus 70% after. In Figure B, the means are 51% versus 74%, respectively. If just considering mean proportions before and after the intervention, we may determine that it was effective in both scenarios. However, the benefit of examining the trends in Figure is clear—we can see evidence of an intervention effect in Figure A and no effect in Figure B. Interrupted time series (ITS) is a common analytical approach in NES (Table ). A review in 2019 identified over 200 articles that reported using ITS in a health care setting (although only 116 met the full inclusion criteria for the review). As per our example (Figure ), in an ITS, equally spaced data points are compared before and after the intervention (the interruption) is implemented. To conduct ITS analysis, a large number (typically at least 8) of data points are needed before and after the interruption. Regression modeling is used to estimate the underlying trend in the preinterruption data and consequently the expected trend if the interruption had not occurred, which is termed the “counterfactual.” The counterfactual is a comparator for the observed postinterruption data to examine whether the interruption had an effect significantly different from the expected trend. In doing so, the ITS design controls for any pre-existing trends in the data. However, ITS can still give misleading results: the before and after populations may not have the same characteristics, time may have affected the primary outcome independently of the tested intervention, and hidden environmental confounders that cannot be adjusted for may have altered the observed trends.
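To make the segmented regression behind an ITS concrete, the sketch below fits an immediate level change and a slope change to simulated monthly data. All numbers, variable names, and effect sizes are invented for illustration and are not taken from any of the studies discussed; in practice, autocorrelation and seasonality would also need to be addressed (eg, with robust standard errors or ARIMA-type models).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# 24 monthly data points before and after a hypothetical intervention
n_pre, n_post = 24, 24
time = np.arange(n_pre + n_post)                   # overall time index
post = (time >= n_pre).astype(int)                 # 1 after the interruption
time_after = np.where(post == 1, time - n_pre, 0)  # months since the interruption

# Simulated outcome: pre-existing upward trend, an immediate drop at the
# interruption, a small slope change afterwards, plus random noise
y = 50 + 0.2 * time - 6 * post + 0.1 * time_after + rng.normal(0, 1.5, time.size)
df = pd.DataFrame({"y": y, "time": time, "post": post, "time_after": time_after})

# Segmented regression: 'post' estimates the level change at the interruption,
# 'time_after' estimates the change in slope relative to the pre-period trend
model = smf.ols("y ~ time + post + time_after", data=df).fit()
print(model.summary().tables[1])

# Counterfactual: the pre-period trend projected forward (post = time_after = 0)
counterfactual = model.params["Intercept"] + model.params["time"] * time
```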
The addition of a group that is unexposed to the intervention adds validity by controlling for hidden confounding. O’Donnell et al (Table ) conducted a controlled ITS. Two control groups were used: the whole of England and a subgroup that included only Northern England. Figure C and D illustrate the benefit of a control group using hypothetical data. Figure C illustrates similar effects in the control and intervention time series, indicating that a confounder—common to both groups—rather than the intervention is increasing the percentages. In Figure D, we see an absence of change in the control time series, supporting the assertion that the observed effect is a result of the intervention. The control group needs to be carefully chosen. One needs to be confident that the control group is exposed to the same environmental influences as the intervention group—except for the intervention itself—and be confident that the control group cannot be affected by the intervention through contamination. In their study protocol, Hickman and colleagues describe their intention to use an adapted causal impact synthetic control model to assess the impact of changing service design on HCV. The synthetic control population is based on preintervention population characteristics and provides a counterfactual trend against which the impact of the intervention can be compared. The use of a synthetic control population has the advantage of being less subjective and should ensure it is more representative of the wider population.
Conducting and reporting NES
One of our selected studies (Table ) presents a protocol. The MRC and others recommend the publication of a study protocol in advance of conducting NES. Otherwise, there is a risk of a blurring of intended target populations, outcomes, and analytical approaches. Alongside the robust approaches to assess causal inference we have described, a published a priori protocol adds validity to the findings and has the potential to broaden the acceptability of NES as admissible evidence for causation. For reference, a detailed framework of what to include in the protocol has recently been published. In their study, O’Donnell and colleagues used a recognized reporting guideline. The reporting guideline they used is specific to studies using an ITS design and describes 8 quality criteria. The first 4 criteria relate to the general quality of NES, and the remainder are specific to ITS. Alternatively, other authors recommend using the TREND guidelines. These were developed by the US Centers for Disease Control and Prevention to improve the quality of studies testing interventions designed to tackle the HIV epidemic and were modeled on the CONSORT reporting guideline for RCTs. The TREND guidelines are now widely used, frequently requested by journal editors, and are specific for studies that evaluate interventions using nonrandomized designs. The TREND checklist includes 5 sections; many subsections are more applicable to quasi-experiments as they assume the researcher has control over the intervention and (nonrandomized) allocation of participants. The MRC gives an adapted, brief, and more specific summary of what should be reported in NES to convey validity (Table ).
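Returning to the controlled ITS design described above (the approach taken by O’Donnell et al of adding an unexposed comparison series), the sketch below shows one common way to specify such a model: the comparison series is stacked with the intervention series, and the interaction between the group indicator and the post-intervention indicator estimates the effect of the intervention over and above any change shared by both series. The regions, data, and effect sizes are simulated purely for illustration and are not drawn from the published study; a full analysis would also model slope changes and autocorrelation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

def make_series(group: str, level_change: float) -> pd.DataFrame:
    """Simulate 48 monthly observations with an interruption at month 24."""
    time = np.arange(48)
    post = (time >= 24).astype(int)
    y = 50 + 0.2 * time + level_change * post + rng.normal(0, 1.5, time.size)
    return pd.DataFrame({"y": y, "time": time, "post": post, "group": group})

# Hypothetical intervention region (level drop after the policy) and a control
# region exposed to the same background trend but not to the policy
df = pd.concat([make_series("intervention", -6.0), make_series("control", 0.0)])

# Controlled ITS: the post:group interaction is the quantity of interest; it
# estimates the level change in the intervention series beyond any change
# seen in the control series (ie, beyond shared, unmeasured influences)
model = smf.ols("y ~ time * group + post * group", data=df).fit()
print(model.params)
```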
Ethical considerations in NES
We argue that the use of NES in hepatology will help physicians adhere to the World Medical Association Declaration of Helsinki; specifically, natural experiments will serve to enhance equity of access to health research for disadvantaged and marginalized populations and provide a means to test unproven interventions that have been implemented into practice. Other aspects of the declaration are also important when planning and conducting a natural experiment. Although the intervention is largely or totally outside of the researcher’s control, the physician-researcher still has obligations to prevent harm occurring to participants. This is more complex than in an RCT or quasi-experiment. Consider Jugnarain et al in Table . What if the peer-support program had been unexpectedly associated with reduced engagement with HCV treatment, or the researchers observed unanticipated negative effects—so-called adventitious harms? The research team would have been ethically obliged to meet with commissioners, publish and publicize their findings, and encourage consideration of suspending the service. However, the ability of a researcher to act to prevent harm in NES is usually limited. The analysis of a NES is typically conducted well after the intervention has been implemented (as in all of the examples we cite above)—therefore, the findings of the study cannot alter exposures that have already taken place. Research participants should always give informed consent for data collection and, in the case of RCTs and quasi-experiments, allocation/randomization to an intervention or control group. In NES, the intervention is outside of the researcher’s control, so there is no need to collect informed consent for this; however, ethical approval is still required for the collection and use of data about the participants unless the data are aggregated, anonymized, and in the public domain.
The future of NES in hepatology
In this review, we have described 2 pathways that lead to health policy action. One relies on the conventional hierarchy of evidence before the implementation of an intervention. The second relies on post hoc analysis. We have highlighted 3 examples of hepatology clinical practice that have followed this second pathway: HCC surveillance, baby-boomer screening for HCV, and a community program to identify compensated liver cirrhosis and advanced fibrosis. Importantly, these programs are being implemented alongside electronic health records and accessible “big data.” A reliance on conventional observational research designs to use these data and evaluate these programs has limitations. NES go some way to addressing these limitations, and we hope this article will provoke thought and debate about how they could be applied. Consider baby-boomer screening for HCV, which was recommended in 2012. Can NES address some of the concerns raised by Koretz et al about the effectiveness of the program? If the implementation of screening was asymmetrical (eg, between US states), did naturally occurring exposed and unexposed populations take shape that are sufficiently similar and large enough to observe relative rates of liver transplantation or death in the years that followed? To address the upstream determinants of liver-related morbidity and mortality, the field of hepatology is moving toward a focus on large-scale public health interventions. Relatively cheap and safe interventions are being deployed in community settings.
We argue NES are needed to test the effectiveness of these interventions, and the hepatology community needs to familiarize itself with their design, strengths, and limitations.
|